diff --git a/README.md b/README.md
index 87354dad..85cf88ce 100644
--- a/README.md
+++ b/README.md
@@ -16,223 +16,149 @@
**cua** ("koo-ah") is Docker for [Computer-Use Agents](https://www.oneusefulthing.org/p/when-you-give-a-claude-a-mouse) - it enables AI agents to control full operating systems in virtual containers and deploy them locally or to the cloud.
-
+
-
-Check out more demos of the Computer-Use Agent in action
-
-
-MCP Server: Work with Claude Desktop and Tableau
-
-
-
-
-
+With the Computer SDK, you can:
+- automate Windows, Linux, and macOS VMs with a consistent, [pyautogui-like API](https://docs.trycua.com/docs/libraries/computer#interface-actions)
+- create & manage VMs [locally](https://docs.trycua.com/docs/computer-sdk/computers#cua-local-containers) or using [cua cloud](https://www.trycua.com/)
-
-AI-Gradio: Multi-app workflow with browser, VS Code and terminal
-
-
-
-
-
+With the Agent SDK, you can:
+- run computer-use models with a [consistent output](https://docs.trycua.com/docs/agent-sdk/chat-history#message-array-structure)
+- run composed agents using UI grounding models and any LLM
+- use any LiteLLM provider (`openai/`, `openrouter/`, etc.) or our included local providers (`huggingface-local/`, `mlx/`)
+- quickly evaluate new UI agent models and UI grounding models
+ - `anthropic/claude-opus-4-1-20250805` (using [Computer-Use Models](https://docs.trycua.com/docs/agent-sdk/supported-agents/computer-use-agents))
+ - `openai/computer-use-preview`
+ - `openrouter/z-ai/glm-4.5v`
+ - `huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B`
+ - `omniparser+{any LLM}` (using [Composed Agents](https://docs.trycua.com/docs/agent-sdk/supported-agents/composed-agents))
+ - `huggingface-local/HelloKKMe/GTA1-7B+{any LLM}`
+ - `huggingface/HelloKKMe/GTA1-32B+{any LLM}`
+ - `vllm_hosted/HelloKKMe/GTA1-72B+{any LLM}`
+ - `human/human` (using [Human-in-the-Loop](https://docs.trycua.com/docs/agent-sdk/supported-agents/human-in-the-loop))
+- benchmark on OSWorld-Verified, SheetBench-V2, and more [with a single line of code using HUD](https://docs.trycua.com/docs/agent-sdk/integrations/hud) ([Notebook](https://github.com/trycua/cua/blob/main/notebooks/eval_osworld.ipynb))
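+
+Any of these model strings can be swapped in with a one-line change. A minimal sketch (assuming a `computer` instance from the Computer SDK):
+
+```python
+from agent import ComputerAgent
+
+# swap the model string to try a different provider or grounding model
+agent = ComputerAgent(model="openrouter/z-ai/glm-4.5v", tools=[computer])
+```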
-
-Notebook: Fix GitHub issue in Cursor
-
-
-
-
-
-
-
-# 🚀 Quick Start with a Computer-Use Agent UI
-
-**Need to automate desktop tasks? Launch the Computer-Use Agent UI with a single command.**
-
-### Option 1: Fully-managed install with Docker (recommended)
-
-*Docker-based guided install for quick use*
-
-**macOS/Linux/Windows (via WSL):**
-
-```bash
-# Requires Docker
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/scripts/playground-docker.sh)"
-```
-
-This script will guide you through setup using Docker containers and launch the Computer-Use Agent UI.
-
----
-
-### Option 2: [Dev Container](./.devcontainer/README.md)
-
-*Best for contributors and development*
-
-This repository includes a [Dev Container](./.devcontainer/README.md) configuration that simplifies setup to a few steps:
-
-1. **Install the Dev Containers extension ([VS Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) or [WindSurf](https://docs.windsurf.com/windsurf/advanced#dev-containers-beta))**
-2. **Open the repository in the Dev Container:**
- - Press `Ctrl+Shift+P` (or `⌘+Shift+P` on macOS)
- - Select `Dev Containers: Clone Repository in Container Volume...` and paste the repository URL: `https://github.com/trycua/cua.git` (if not cloned) or `Dev Containers: Open Folder in Container...` (if git cloned).
- > **Note**: On WindSurf, the post install hook might not run automatically. If so, run `/bin/bash .devcontainer/post-install.sh` manually.
-3. **Open the VS Code workspace:** Once the post-install.sh is done running, open the `.vscode/py.code-workspace` workspace and press 
-.
-4. **Run the Agent UI example:** Click 
- to start the Gradio UI. If prompted to install **debugpy (Python Debugger)** to enable remote debugging, select 'Yes' to proceed.
-5. **Access the Gradio UI:** The Gradio UI will be available at `http://localhost:7860` and will automatically forward to your host machine.
-
----
-
-### Option 3: PyPI
-
-*Direct Python package installation*
-
-```bash
-# conda create -yn cua python==3.12
-
-pip install -U "cua-computer[all]" "cua-agent[all]"
-python -m agent.ui # Start the agent UI
-```
-
-Or check out the [Usage Guide](#-usage-guide) to learn how to use our Python SDK in your own code.
-
----
-
-## Supported [Agent Loops](https://github.com/trycua/cua/blob/main/libs/python/agent/README.md#agent-loops)
-
-- [UITARS-1.5](https://github.com/trycua/cua/blob/main/libs/python/agent/README.md#agent-loops) - Run locally on Apple Silicon with MLX, or use cloud providers
-- [OpenAI CUA](https://github.com/trycua/cua/blob/main/libs/python/agent/README.md#agent-loops) - Use OpenAI's Computer-Use Preview model
-- [Anthropic CUA](https://github.com/trycua/cua/blob/main/libs/python/agent/README.md#agent-loops) - Use Anthropic's Computer-Use capabilities
-- [OmniParser-v2.0](https://github.com/trycua/cua/blob/main/libs/python/agent/README.md#agent-loops) - Control UI with [Set-of-Marks prompting](https://som-gpt4v.github.io/) using any vision model
-
-## 🖥️ Compatibility
-
-For detailed compatibility information including host OS support, VM emulation capabilities, and model provider compatibility, see the [Compatibility Matrix](./COMPATIBILITY.md).
+Missing a model? [Raise a feature request](https://github.com/trycua/cua/issues/new?assignees=&labels=enhancement&projects=&title=%5BAgent%5D%3A+Add+model+support+for+) or [contribute](https://github.com/trycua/cua/blob/main/CONTRIBUTING.md)!
+
+# Quick Start
+
+- [Get started with a Computer-Use Agent UI](https://docs.trycua.com/docs/quickstart-ui)
+- [Get started with the Computer-Use Agent CLI](https://docs.trycua.com/docs/quickstart-cli)
+- [Get Started with the Python SDKs](https://docs.trycua.com/docs/quickstart-devs)
+
-# 🐍 Usage Guide
-
-Follow these steps to use Cua in your own Python code. See [Developer Guide](./docs/Developer-Guide.md) for building from source.
-
-### Step 1: Install Lume CLI
+# Usage ([Docs](https://docs.trycua.com/docs))
```bash
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh)"
+pip install "cua-agent[all]"
+```
+```python
+from agent import ComputerAgent
+
+# `computer` is a connected Computer instance (see the Computer section below)
+agent = ComputerAgent(
+ model="anthropic/claude-3-5-sonnet-20241022",
+ tools=[computer],
+ max_trajectory_budget=5.0
+)
+
+messages = [{"role": "user", "content": "Take a screenshot and tell me what you see"}]
+
+async for result in agent.run(messages):
+ for item in result["output"]:
+ if item["type"] == "message":
+ print(item["content"][0]["text"])
```
-Lume CLI manages high-performance macOS/Linux VMs with near-native speed on Apple Silicon.
+### Output format (OpenAI Agent Responses format)
+```jsonc
+{
+  "output": [
+    // user input
+    {
+      "role": "user",
+      "content": "go to trycua on gh"
+    },
+    // first agent turn adds the model output to the history
+    {
+      "summary": [
+        {
+          "text": "Searching Firefox for Trycua GitHub",
+          "type": "summary_text"
+        }
+      ],
+      "type": "reasoning"
+    },
+    {
+      "action": {
+        "text": "Trycua GitHub",
+        "type": "type"
+      },
+      "call_id": "call_QI6OsYkXxl6Ww1KvyJc4LKKq",
+      "status": "completed",
+      "type": "computer_call"
+    },
+    // second agent turn adds the computer output to the history
+    {
+      "type": "computer_call_output",
+      "call_id": "call_QI6OsYkXxl6Ww1KvyJc4LKKq",
+      "output": {
+        "type": "input_image",
+        "image_url": "data:image/png;base64,..."
+      }
+    },
+    // final agent turn adds the agent output text to the history
+    {
+      "type": "message",
+      "role": "assistant",
+      "content": [
+        {
+          "text": "Success! The Trycua GitHub page has been opened.",
+          "type": "output_text"
+        }
+      ]
+    }
+  ],
+  "usage": {
+    "prompt_tokens": 150,
+    "completion_tokens": 75,
+    "total_tokens": 225,
+    "response_cost": 0.01
+  }
+}
+```
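+
+A minimal sketch of consuming this structure, e.g. to tally cost across turns (assumes the `agent` and `messages` from the snippet above):
+
+```python
+total_cost = 0.0
+async for result in agent.run(messages):
+    total_cost += result["usage"]["response_cost"]
+    for item in result["output"]:
+        if item["type"] == "computer_call":
+            print("action:", item["action"]["type"])
+print(f"total cost: ${total_cost:.2f}")
+```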
-### Step 2: Pull the macOS CUA Image
+# Computer ([Docs](https://docs.trycua.com/docs/computer-sdk/computers))
```bash
-lume pull macos-sequoia-cua:latest
+pip install "cua-computer[all]"
```
-
-The macOS CUA image contains the default Mac apps and the Computer Server for easy automation.
-
-### Step 3: Install Python SDK
-
-```bash
-pip install "cua-computer[all]" "cua-agent[all]"
-```
-
-### Step 4: Use in Your Code
-
```python
from computer import Computer
-from agent import ComputerAgent, LLM
-async def main():
- # Start a local macOS VM
- computer = Computer(os_type="macos")
- await computer.run()
+async with Computer(
+ os_type="linux",
+ provider_type="cloud",
+ name="your-container-name",
+ api_key="your-api-key"
+) as computer:
+ # Take screenshot
+ screenshot = await computer.interface.screenshot()
- # Or with Cua Cloud Container
- computer = Computer(
- os_type="linux",
- api_key="your_cua_api_key_here",
- name="your_container_name_here"
- )
-
- # Example: Direct control of a macOS VM with Computer
- computer.interface.delay = 0.1 # Wait 0.1 seconds between kb/m actions
- await computer.interface.left_click(100, 200)
- await computer.interface.type_text("Hello, world!")
- screenshot_bytes = await computer.interface.screenshot()
-
- # Example: Create and run an agent locally using mlx-community/UI-TARS-1.5-7B-6bit
- agent = ComputerAgent(
- model="mlx/mlx-community/UI-TARS-1.5-7B-6bit",
- tools=[computer],
- )
- async for result in agent.run("Find the trycua/cua repository on GitHub and follow the quick start guide"):
- print(result)
-
-if __name__ == "__main__":
- asyncio.run(main())
+ # Click and type
+ await computer.interface.left_click(100, 100)
+ await computer.interface.type("Hello!")
```
-For ready-to-use examples, check out our [Notebooks](./notebooks/) collection.
-
-### Lume CLI Reference
-
-```bash
-# Install Lume CLI and background service
-curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh | bash
-
-# List all VMs
-lume ls
-
-# Pull a VM image
-lume pull macos-sequoia-cua:latest
-
-# Create a new VM
-lume create my-vm --os macos --cpu 4 --memory 8GB --disk-size 50GB
-
-# Run a VM (creates and starts if it doesn't exist)
-lume run macos-sequoia-cua:latest
-
-# Stop a VM
-lume stop macos-sequoia-cua_latest
-
-# Delete a VM
-lume delete macos-sequoia-cua_latest
-```
-
-### Lumier CLI Reference
-
-For advanced container-like virtualization, check out [Lumier](./libs/lumier/README.md) - a Docker interface for macOS and Linux VMs.
-
-```bash
-# Install Lume CLI and background service
-curl -fsSL https://raw.githubusercontent.com/trycua/cua/main/libs/lume/scripts/install.sh | bash
-
-# Run macOS in a Docker container
-docker run -it --rm \
- --name lumier-vm \
- -p 8006:8006 \
- -v $(pwd)/storage:/storage \
- -v $(pwd)/shared:/shared \
- -e VM_NAME=lumier-vm \
- -e VERSION=ghcr.io/trycua/macos-sequoia-cua:latest \
- -e CPU_CORES=4 \
- -e RAM_SIZE=8192 \
- -e HOST_STORAGE_PATH=$(pwd)/storage \
- -e HOST_SHARED_PATH=$(pwd)/shared \
- trycua/lumier:latest
-```
-
-## Resources
+# Resources
- [How to use the MCP Server with Claude Desktop or other MCP clients](./libs/python/mcp-server/README.md) - One of the easiest ways to get started with Cua
- [How to use OpenAI Computer-Use, Anthropic, OmniParser, or UI-TARS for your Computer-Use Agent](./libs/python/agent/README.md)
- [How to use Lume CLI for managing desktops](./libs/lume/README.md)
- [Training Computer-Use Models: Collecting Human Trajectories with Cua (Part 1)](https://www.trycua.com/blog/training-computer-use-models-trajectories-1)
-- [Build Your Own Operator on macOS (Part 1)](https://www.trycua.com/blog/build-your-own-operator-on-macos-1)
## Modules
@@ -249,112 +175,6 @@ docker run -it --rm \
| [**Core (Python)**](./libs/python/core/README.md) | Python Core utilities | `pip install cua-core` |
| [**Core (Typescript)**](./libs/typescript/core/README.md) | Typescript Core utilities | `npm install @trycua/core` |
-## Computer Interface Reference
-
-For complete examples, see [computer_examples.py](./examples/computer_examples.py) or [computer_nb.ipynb](./notebooks/computer_nb.ipynb)
-
-```python
-# Shell Actions
-result = await computer.interface.run_command(cmd) # Run shell command
-# result.stdout, result.stderr, result.returncode
-
-# Mouse Actions
-await computer.interface.left_click(x, y) # Left click at coordinates
-await computer.interface.right_click(x, y) # Right click at coordinates
-await computer.interface.double_click(x, y) # Double click at coordinates
-await computer.interface.move_cursor(x, y) # Move cursor to coordinates
-await computer.interface.drag_to(x, y, duration) # Drag to coordinates
-await computer.interface.get_cursor_position() # Get current cursor position
-await computer.interface.mouse_down(x, y, button="left") # Press and hold a mouse button
-await computer.interface.mouse_up(x, y, button="left") # Release a mouse button
-
-# Keyboard Actions
-await computer.interface.type_text("Hello") # Type text
-await computer.interface.press_key("enter") # Press a single key
-await computer.interface.hotkey("command", "c") # Press key combination
-await computer.interface.key_down("command") # Press and hold a key
-await computer.interface.key_up("command") # Release a key
-
-# Scrolling Actions
-await computer.interface.scroll(x, y) # Scroll the mouse wheel
-await computer.interface.scroll_down(clicks) # Scroll down
-await computer.interface.scroll_up(clicks) # Scroll up
-
-# Screen Actions
-await computer.interface.screenshot() # Take a screenshot
-await computer.interface.get_screen_size() # Get screen dimensions
-
-# Clipboard Actions
-await computer.interface.set_clipboard(text) # Set clipboard content
-await computer.interface.copy_to_clipboard() # Get clipboard content
-
-# File System Operations
-await computer.interface.file_exists(path) # Check if file exists
-await computer.interface.directory_exists(path) # Check if directory exists
-await computer.interface.read_text(path, encoding="utf-8") # Read file content
-await computer.interface.write_text(path, content, encoding="utf-8") # Write file content
-await computer.interface.read_bytes(path) # Read file content as bytes
-await computer.interface.write_bytes(path, content) # Write file content as bytes
-await computer.interface.delete_file(path) # Delete file
-await computer.interface.create_dir(path) # Create directory
-await computer.interface.delete_dir(path) # Delete directory
-await computer.interface.list_dir(path) # List directory contents
-
-# Accessibility
-await computer.interface.get_accessibility_tree() # Get accessibility tree
-
-# Delay Configuration
-# Set default delay between all actions (in seconds)
-computer.interface.delay = 0.5 # 500ms delay between actions
-
-# Or specify delay for individual actions
-await computer.interface.left_click(x, y, delay=1.0) # 1 second delay after click
-await computer.interface.type_text("Hello", delay=0.2) # 200ms delay after typing
-await computer.interface.press_key("enter", delay=0.5) # 500ms delay after key press
-
-# Python Virtual Environment Operations
-await computer.venv_install("demo_venv", ["requests", "macos-pyxa"]) # Install packages in a virtual environment
-await computer.venv_cmd("demo_venv", "python -c 'import requests; print(requests.get(`https://httpbin.org/ip`).json())'") # Run a shell command in a virtual environment
-await computer.venv_exec("demo_venv", python_function_or_code, *args, **kwargs) # Run a Python function in a virtual environment and return the result / raise an exception
-
-# Example: Use sandboxed functions to execute code in a Cua Container
-from computer.helpers import sandboxed
-
-@sandboxed("demo_venv")
-def greet_and_print(name):
- """Get the HTML of the current Safari tab"""
- import PyXA
- safari = PyXA.Application("Safari")
- html = safari.current_document.source()
- print(f"Hello from inside the container, {name}!")
- return {"greeted": name, "safari_html": html}
-
-# When a @sandboxed function is called, it will execute in the container
-result = await greet_and_print("Cua")
-# Result: {"greeted": "Cua", "safari_html": "..."}
-# stdout and stderr are also captured and printed / raised
-print("Result from sandboxed function:", result)
-```
-
-## ComputerAgent Reference
-
-For complete examples, see [agent_examples.py](./examples/agent_examples.py) or [agent_nb.ipynb](./notebooks/agent_nb.ipynb)
-
-```python
-# Import necessary components
-from agent import ComputerAgent
-
-# UI-TARS-1.5 agent for local execution with MLX
-ComputerAgent(model="mlx/mlx-community/UI-TARS-1.5-7B-6bit")
-# OpenAI Computer-Use agent using OPENAI_API_KEY
-ComputerAgent(model="computer-use-preview")
-# Anthropic Claude agent using ANTHROPIC_API_KEY
-ComputerAgent(model="anthropic/claude-3-5-sonnet-20240620")
-
-# OmniParser loop for UI control using Set-of-Marks (SOM) prompting and any vision LLM
-ComputerAgent(model="omniparser+ollama_chat/gemma3:12b-it-q4_K_M")
-```
-
## Community
Join our [Discord community](https://discord.com/invite/mVnXXpdE85) to discuss ideas, get assistance, or share your demos!
@@ -409,4 +229,4 @@ Thank you to all our supporters!
-
\ No newline at end of file
+
diff --git a/docs/content/docs/agent-sdk/agent-loops.mdx b/docs/content/docs/agent-sdk/agent-loops.mdx
index bc26cf26..0be4e009 100644
--- a/docs/content/docs/agent-sdk/agent-loops.mdx
+++ b/docs/content/docs/agent-sdk/agent-loops.mdx
@@ -29,11 +29,4 @@ async for result in agent.run(prompt):
print("Agent:", result["output"][-1]["content"][0]["text"])
```
-We currently support 4 computer-using agent loops:
-
-- Anthropic CUAs
-- OpenAI CUA Preview
-- UI-TARS 1.5
-- Omniparser + LLMs
-
-For a full list of supported models and configurations, see the [Supported Agents](./supported-agents) page.
+For a list of supported models and configurations, see the [Supported Agents](./supported-agents/computer-use-agents) page.
diff --git a/docs/content/docs/agent-sdk/benchmarks/index.mdx b/docs/content/docs/agent-sdk/benchmarks/index.mdx
new file mode 100644
index 00000000..59e9b7ad
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/index.mdx
@@ -0,0 +1,28 @@
+---
+title: Benchmarks
+description: Computer Agent SDK benchmarks for agentic GUI tasks
+---
+
+The benchmark system evaluates models on GUI grounding tasks, specifically agent loop success rate and click prediction accuracy. It supports both:
+- **Computer Agent SDK providers** (using model strings like `"huggingface-local/HelloKKMe/GTA1-7B"`)
+- **Reference agent implementations** (custom model classes implementing the `ModelProtocol`)
+
+## Available Benchmarks
+
+- **[ScreenSpot-v2](./screenspot-v2)** - Standard resolution GUI grounding
+- **[ScreenSpot-Pro](./screenspot-pro)** - High-resolution GUI grounding
+- **[Interactive Testing](./interactive)** - Real-time testing and visualization
+
+## Quick Start
+
+```bash
+# Clone the benchmark repository
+git clone https://github.com/trycua/cua
+cd cua/libs/python/agent/benchmarks
+
+# Install dependencies
+pip install "cua-agent[all]"
+
+# Run a benchmark
+python ss-v2.py
+```
diff --git a/docs/content/docs/agent-sdk/benchmarks/interactive.mdx b/docs/content/docs/agent-sdk/benchmarks/interactive.mdx
new file mode 100644
index 00000000..43170ca4
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/interactive.mdx
@@ -0,0 +1,21 @@
+---
+title: Interactive Tool
+description: Real-time testing and visualization tool for GUI grounding models
+---
+
+This tool allows you to test multiple models interactively by providing natural language instructions. It automatically captures screenshots and tests all configured models sequentially, providing immediate feedback and visual results.
+
+## Usage
+
+```bash
+# Start the interactive tool
+cd libs/python/agent/benchmarks
+python interactive.py
+```
+
+## Commands
+
+- **Type instruction**: Screenshot + test all models
+- **`screenshot`**: Take screenshot without prediction
+- **`models`**: List available models
+- **`quit`/`exit`**: Exit the tool
diff --git a/docs/content/docs/agent-sdk/benchmarks/introduction.mdx b/docs/content/docs/agent-sdk/benchmarks/introduction.mdx
new file mode 100644
index 00000000..3f2251f8
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/introduction.mdx
@@ -0,0 +1,57 @@
+---
+title: Introduction
+description: Overview of benchmarking in the c/ua agent framework
+---
+
+The c/ua agent framework uses benchmarks to test the performance of supported models and providers at various agentic tasks.
+
+## Benchmark Types
+
+Computer-Agent benchmarks evaluate two key capabilities:
+- **Plan Generation**: Breaking down complex tasks into a sequence of actions
+- **Coordinate Generation**: Predicting precise click locations on GUI elements
+
+## Using State-of-the-Art Models
+
+Let's see how to use state-of-the-art vision-language models in the c/ua agent framework.
+
+### Plan Generation + Coordinate Generation
+
+**[OS-World](https://os-world.github.io/)** - Benchmark for complete computer-use agents
+
+This leaderboard tests models that can understand instructions and automatically perform the full sequence of actions needed to complete tasks.
+
+```python
+# UI-TARS-1.5 is a SOTA unified plan generation + coordinate generation VLM
+# This makes it suitable for agentic loops for computer-use
+agent = ComputerAgent("huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B", tools=[computer])
+async for _ in agent.run("Open Firefox and go to github.com"):
+    pass
+# Success! 🎉
+```
+
+### Coordinate Generation Only
+
+**[GUI Agent Grounding Leaderboard](https://gui-agent.github.io/grounding-leaderboard/)** - Benchmark for click prediction accuracy
+
+This leaderboard tests models that specialize in finding exactly where to click on screen elements, but need to be told what specific action to take.
+
+```python
+# GTA1-7B is a SOTA coordinate generation VLM
+# It can only generate coordinates, not plans:
+agent = ComputerAgent("huggingface-local/HelloKKMe/GTA1-7B", tools=[computer])
+await agent.predict_click("find the button to open the settings")  # (27, 450)
+# This will raise an error:
+# agent.run("Open Firefox and go to github.com")
+```
+
+### Composed Agent
+
+The c/ua agent framework also supports composed agents, which combine a planning model with a clicking model for the best of both worlds. Any LiteLLM model can be used as the plan generation model.
+
+```python
+# It can be paired with any LLM to form a composed agent:
+# "gemini/gemini-1.5-pro" will be used as the plan generation LLM
+agent = ComputerAgent("huggingface-local/HelloKKMe/GTA1-7B+gemini/gemini-1.5-pro", tools=[computer])
+async for _ in agent.run("Open Firefox and go to github.com"):
+    pass
+# Success! 🎉
+```
diff --git a/docs/content/docs/agent-sdk/benchmarks/meta.json b/docs/content/docs/agent-sdk/benchmarks/meta.json
new file mode 100644
index 00000000..3573a892
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/meta.json
@@ -0,0 +1,9 @@
+{
+ "pages": [
+ "introduction",
+ "screenspot-v2",
+ "screenspot-pro",
+ "interactive",
+ "osworld-verified"
+ ]
+}
\ No newline at end of file
diff --git a/docs/content/docs/agent-sdk/benchmarks/osworld-verified.mdx b/docs/content/docs/agent-sdk/benchmarks/osworld-verified.mdx
new file mode 100644
index 00000000..8d82b205
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/osworld-verified.mdx
@@ -0,0 +1,89 @@
+---
+title: OSWorld-Verified
+description: Benchmark ComputerAgent on OSWorld tasks using HUD
+---
+
+OSWorld-Verified is a curated subset of OSWorld tasks that can be run using the HUD framework. Use ComputerAgent with HUD to benchmark on these tasks.
+
+## Setup
+
+```bash
+pip install hud-python==0.2.10
+```
+
+Set environment variables:
+```bash
+export HUD_API_KEY="your_hud_key"
+export ANTHROPIC_API_KEY="your_anthropic_key" # For Claude
+export OPENAI_API_KEY="your_openai_key" # For OpenAI
+```
+
+## Quick Start
+
+```python
+import asyncio
+from hud import gym, load_taskset
+from agent.integrations.hud import ComputerAgent
+
+async def run_osworld():
+ # Load taskset
+ taskset = await load_taskset("OSWorld-Verified")
+ test = taskset[144] # Example task
+
+ # Create environment (~2.5 min startup)
+ env = await gym.make(test)
+
+ # Create agent
+ agent = ComputerAgent(
+ model="anthropic/claude-3-5-sonnet-20241022", # any ComputerAgent model string
+ environment="linux"
+ )
+
+ # Run benchmark
+ obs, _ = await env.reset()
+ for i in range(100):
+ action, done = await agent.predict(obs)
+ obs, reward, terminated, info = await env.step(action)
+ if done or terminated:
+ break
+
+ # Evaluate results
+ result = await env.evaluate()
+ await env.close()
+
+ return result
+
+# Run benchmark
+result = asyncio.run(run_osworld())
+print(f"Success: {result.get('success', False)}")
+```
+
+## Parallel Execution
+
+Run all tasks in parallel using `run_job`:
+
+```python
+from agent.integrations.hud import run_job
+from hud import load_taskset
+from hud.taskset import TaskSet
+import logging
+
+# Load taskset
+taskset = await load_taskset("OSWorld-Verified")
+taskset = TaskSet(tasks=taskset[:10]) # limit to 10 tasks instead of all 370
+
+# Run benchmark job
+job = await run_job(
+ model="openai/computer-use-preview",
+ task_or_taskset=taskset,
+ job_name="test-computeragent-job",
+ max_concurrent_tasks=5,
+ # add any extra ComputerAgent kwargs:
+ verbosity=logging.INFO, # Enable logging
+ # trajectory_dir=".." # Save trajectories locally
+)
+
+# Get results OR view them at app.hud.so
+print(await job.get_analytics())
+print(f"View results at: https://app.hud.so/jobs/{job.id}")
+```
\ No newline at end of file
diff --git a/docs/content/docs/agent-sdk/benchmarks/screenspot-pro.mdx b/docs/content/docs/agent-sdk/benchmarks/screenspot-pro.mdx
new file mode 100644
index 00000000..402b919e
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/screenspot-pro.mdx
@@ -0,0 +1,25 @@
+---
+title: ScreenSpot-Pro
+description: High-resolution GUI grounding benchmark
+---
+
+ScreenSpot-Pro is a benchmark for evaluating click prediction accuracy on high-resolution GUI screenshots with complex layouts.
+
+## Usage
+
+```bash
+# Run the benchmark
+cd libs/python/agent/benchmarks
+python ss-pro.py
+
+# Run with custom sample limit
+python ss-pro.py --samples 50
+```
+
+## Results
+
+| Model | Accuracy | Failure Rate | Samples |
+|-------|----------|--------------|---------|
+| Coming Soon | - | - | - |
+
+Results will be populated after running benchmarks with various models.
diff --git a/docs/content/docs/agent-sdk/benchmarks/screenspot-v2.mdx b/docs/content/docs/agent-sdk/benchmarks/screenspot-v2.mdx
new file mode 100644
index 00000000..6cfcf1c1
--- /dev/null
+++ b/docs/content/docs/agent-sdk/benchmarks/screenspot-v2.mdx
@@ -0,0 +1,25 @@
+---
+title: ScreenSpot-v2
+description: Standard resolution GUI grounding benchmark
+---
+
+ScreenSpot-v2 is a benchmark for evaluating click prediction accuracy on standard resolution GUI screenshots.
+
+## Usage
+
+```bash
+# Run the benchmark
+cd libs/python/agent/benchmarks
+python ss-v2.py
+
+# Run with custom sample limit
+python ss-v2.py --samples 100
+```
+
+## Results
+
+| Model | Accuracy | Failure Rate | Samples |
+|-------|----------|--------------|---------|
+| Coming Soon | - | - | - |
+
+Results will be populated after running benchmarks with various models.
diff --git a/docs/content/docs/agent-sdk/custom-computer-handlers.mdx b/docs/content/docs/agent-sdk/custom-computer-handlers.mdx
new file mode 100644
index 00000000..a5b05960
--- /dev/null
+++ b/docs/content/docs/agent-sdk/custom-computer-handlers.mdx
@@ -0,0 +1,130 @@
+---
+title: Custom Computers
+slug: custom-computer-handlers
+---
+
+The Agent SDK supports defining custom computer handlers using a simple dictionary interface. This enables integration with custom automation backends, testing frameworks, or specialized computer control systems.
+
+## Example: Defining a Custom Computer Handler
+
+```python
+import asyncio
+from PIL import Image
+
+# Define your custom computer functions
+async def take_screenshot():
+ """Your custom screenshot implementation"""
+ # Return PIL Image, bytes, or base64 string
+ return Image.new('RGB', (1920, 1080), color='white')
+
+# Create dict-based computer handler - only 'screenshot' is required
+custom_computer = {
+ 'screenshot': take_screenshot, # required
+
+ # everything below is optional
+ 'environment': 'linux', # linux, mac, windows, browser
+ 'dimensions': (1920, 1080), # (width, height)
+ 'click': lambda x, y, button: print(f"Clicking at ({x}, {y}) with {button} button"),
+}
+```
+
+You can then use this as a tool for your agent:
+
+```python
+from agent import ComputerAgent
+
+agent = ComputerAgent(
+ model="anthropic/claude-3-5-sonnet-20240620",
+ tools=[custom_computer],
+)
+
+# Agent will automatically convert dict to agent.computers.CustomComputerHandler
+async for _ in agent.run("Take a screenshot and click at coordinates 100, 200"):
+    pass
+```
+
+## Class-Based Implementation
+
+For more complex implementations, you can create a custom class by inheriting from `AsyncComputerHandler`:
+
+```python
+from agent.computers import AsyncComputerHandler
+from PIL import Image
+from typing import Literal, List, Dict, Union, Optional
+
+class MyCustomComputer(AsyncComputerHandler):
+ """Custom computer handler implementation."""
+
+ def __init__(self):
+ # Initialize your custom computer interface here
+ pass
+
+ # ==== Computer-Use-Preview Action Space ====
+
+ async def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]:
+ """Get the current environment type."""
+ ...
+
+ async def get_dimensions(self) -> tuple[int, int]:
+ """Get screen dimensions as (width, height)."""
+ ...
+
+ async def screenshot(self) -> str:
+ """Take a screenshot and return as base64 string."""
+ ...
+
+ async def click(self, x: int, y: int, button: str = "left") -> None:
+ """Click at coordinates with specified button."""
+ ...
+
+ async def double_click(self, x: int, y: int) -> None:
+ """Double click at coordinates."""
+ ...
+
+ async def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
+ """Scroll at coordinates with specified scroll amounts."""
+ ...
+
+ async def type(self, text: str) -> None:
+ """Type text."""
+ ...
+
+ async def wait(self, ms: int = 1000) -> None:
+ """Wait for specified milliseconds."""
+ ...
+
+ async def move(self, x: int, y: int) -> None:
+ """Move cursor to coordinates."""
+ ...
+
+ async def keypress(self, keys: Union[List[str], str]) -> None:
+ """Press key combination."""
+ ...
+
+ async def drag(self, path: List[Dict[str, int]]) -> None:
+ """Drag along specified path."""
+ ...
+
+ async def get_current_url(self) -> str:
+ """Get current URL (for browser environments)."""
+ ...
+
+ # ==== Anthropic Action Space ====
+
+ async def left_mouse_down(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse down at coordinates."""
+ ...
+
+ async def left_mouse_up(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse up at coordinates."""
+ ...
+
+# Use with agent
+custom_computer = MyCustomComputer()
+
+agent = ComputerAgent(
+ model="anthropic/claude-3-5-sonnet-20240620",
+ tools=[custom_computer],
+)
+
+async for _ in agent.run("Take a screenshot and click at coordinates 100, 200"):
+    pass
+```
\ No newline at end of file
diff --git a/docs/content/docs/agent-sdk/integrations/hud.mdx b/docs/content/docs/agent-sdk/integrations/hud.mdx
new file mode 100644
index 00000000..b517121e
--- /dev/null
+++ b/docs/content/docs/agent-sdk/integrations/hud.mdx
@@ -0,0 +1,49 @@
+---
+title: HUD Evals
+description: Use ComputerAgent with HUD for benchmarking and evaluation
+---
+
+The HUD integration allows you to use ComputerAgent with the [HUD benchmarking framework](https://www.hud.so/), providing the same interface as existing HUD agents while leveraging ComputerAgent's capabilities.
+
+## Installation
+
+```bash
+pip install "cua-agent[hud]"
+## or install hud-python directly
+# pip install hud-python==0.2.10
+```
+
+## Usage
+
+```python
+from agent.integrations.hud import run_job
+from hud import load_taskset
+from hud.taskset import TaskSet
+import logging
+
+# Load taskset
+taskset = await load_taskset("OSWorld-Verified")
+taskset = TaskSet(tasks=taskset[:10]) # limit to 10 tasks instead of all 370
+
+# Run benchmark job
+job = await run_job(
+ model="openai/computer-use-preview",
+ # model="anthropic/claude-3-5-sonnet-20241022",
+ # model="huggingface-local/HelloKKMe/GTA1-7B+openai/gpt-5",
+ task_or_taskset=taskset,
+ job_name="test-computeragent-job",
+ max_concurrent_tasks=5,
+ # add any extra ComputerAgent kwargs:
+ verbosity=logging.INFO, # Enable logging
+ # trajectory_dir=".." # Save trajectories locally
+)
+
+# Get results OR view them at app.hud.so
+print(await job.get_analytics())
+print(f"View results at: https://app.hud.so/jobs/{job.id}")
+```
+
+**Available Benchmarks:**
+1. [OSWorld-Verified](/agent-sdk/benchmarks/osworld-verified) - Benchmark on OSWorld tasks
+
+See the [HUD docs](https://docs.hud.so/environment-creation) for more eval environments.
\ No newline at end of file
diff --git a/docs/content/docs/agent-sdk/integrations/meta.json b/docs/content/docs/agent-sdk/integrations/meta.json
new file mode 100644
index 00000000..7b7ebb81
--- /dev/null
+++ b/docs/content/docs/agent-sdk/integrations/meta.json
@@ -0,0 +1,4 @@
+{
+ "title": "Integrations",
+ "pages": ["hud"]
+}
diff --git a/docs/content/docs/agent-sdk/meta.json b/docs/content/docs/agent-sdk/meta.json
index 933452cb..5db33148 100644
--- a/docs/content/docs/agent-sdk/meta.json
+++ b/docs/content/docs/agent-sdk/meta.json
@@ -3,13 +3,16 @@
"description": "Build computer-using agents with the Agent SDK",
"pages": [
"agent-loops",
- "supported-agents",
+ "supported-agents",
"chat-history",
"callbacks",
"sandboxed-tools",
+ "custom-computer-handlers",
"local-models",
"prompt-caching",
"usage-tracking",
- "migration-guide"
+ "benchmarks",
+ "migration-guide",
+ "integrations"
]
}
diff --git a/docs/content/docs/agent-sdk/supported-agents.mdx b/docs/content/docs/agent-sdk/supported-agents.mdx
deleted file mode 100644
index 61abf521..00000000
--- a/docs/content/docs/agent-sdk/supported-agents.mdx
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Supported Agents
----
-
-This page lists all supported agent loops and their compatible models/configurations in cua.
-
-All agent loops are compatible with any LLM provider supported by LiteLLM.
-
-See [Running Models Locally](./local-models) for how to use Hugging Face and MLX models on your own machine.
-
-## Anthropic CUAs
-
-- Claude 4: `claude-opus-4-20250514`, `claude-sonnet-4-20250514`
-- Claude 3.7: `claude-3-7-sonnet-20250219`
-- Claude 3.5: `claude-3-5-sonnet-20240620`
-
-## OpenAI CUA Preview
-
-- Computer-use-preview: `computer-use-preview`
-
-## UI-TARS 1.5
-
-- `huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B`
-- `huggingface/ByteDance-Seed/UI-TARS-1.5-7B` (requires TGI endpoint)
-
-## Omniparser + LLMs
-
-- `omniparser+vertex_ai/gemini-pro`
-- `omniparser+openai/gpt-4o`
-- Any LiteLLM-compatible model combined with Omniparser
-
----
-
-For details on agent loop behavior and usage, see [Agent Loops](./agent-loops).
diff --git a/docs/content/docs/agent-sdk/supported-agents/composed-agents.mdx b/docs/content/docs/agent-sdk/supported-agents/composed-agents.mdx
new file mode 100644
index 00000000..8040d2e5
--- /dev/null
+++ b/docs/content/docs/agent-sdk/supported-agents/composed-agents.mdx
@@ -0,0 +1,106 @@
+---
+title: Composed Agents
+description: Combine grounding models with any LLM for computer-use capabilities
+---
+
+Composed agents combine the best of both worlds: specialized grounding models for precise click prediction and powerful LLMs for task planning and reasoning.
+
+Use the format `"grounding_model+thinking_model"` to create a composed agent with any vision-enabled LiteLLM-compatible model.
+
+## How Composed Agents Work
+
+1. **Planning Phase**: The thinking model (LLM) analyzes the task and decides what actions to take (e.g., `click("find the login button")`, `type("username")`)
+2. **Grounding Phase**: The grounding model converts element descriptions to precise coordinates
+3. **Execution**: Actions are performed using the predicted coordinates
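+
+The planning/grounding split can be sketched as follows. This is a simplified illustration of how a `"grounding_model+thinking_model"` string decomposes, using a hypothetical helper (not the actual cua internals):
+
+```python
+def split_composed_model(model: str) -> tuple[str, str]:
+    """Split a "grounding_model+thinking_model" string into its two parts."""
+    grounding, _, thinking = model.partition("+")
+    return grounding, thinking
+
+# "huggingface-local/HelloKKMe/GTA1-7B+openai/gpt-5"
+# -> grounding model "huggingface-local/HelloKKMe/GTA1-7B",
+#    thinking model "openai/gpt-5"
+```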
+
+## Supported Grounding Models
+
+Any model that supports `predict_click()` can be used as the grounding component:
+
+- `omniparser` (OSS set-of-marks model; must be paired with an LLM)
+- `huggingface-local/HelloKKMe/GTA1-7B` (OSS grounding model)
+- `huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B` (OSS unified model)
+- `claude-3-5-sonnet-20241022` (Anthropic CUA)
+- `openai/computer-use-preview` (OpenAI CUA)
+
+## Supported Thinking Models
+
+Any vision-enabled LiteLLM-compatible model can be used as the thinking component:
+
+- **Anthropic**: `anthropic/claude-3-5-sonnet-20241022`, `anthropic/claude-3-opus-20240229`
+- **OpenAI**: `openai/gpt-5`, `openai/o3`, `openai/gpt-4o`
+- **Google**: `gemini/gemini-1.5-pro`, `vertex_ai/gemini-pro-vision`
+- **Local models**: Any Hugging Face vision-language model
+
+## Usage Examples
+
+### GTA1 + GPT-5
+
+Use OpenAI's GPT-5 for planning with specialized grounding:
+
+```python
+agent = ComputerAgent(
+ "huggingface-local/HelloKKMe/GTA1-7B+openai/gpt-5",
+ tools=[computer]
+)
+
+async for _ in agent.run("Take a screenshot, analyze the UI, and click on the most prominent button"):
+ pass
+```
+
+### GTA1 + Claude 3.5 Sonnet
+
+Combine state-of-the-art grounding with powerful reasoning:
+
+```python
+agent = ComputerAgent(
+ "huggingface-local/HelloKKMe/GTA1-7B+anthropic/claude-3-5-sonnet-20241022",
+ tools=[computer]
+)
+
+async for _ in agent.run("Open Firefox, navigate to github.com, and search for 'computer-use'"):
+ pass
+# Success! 🎉
+# - Claude 3.5 Sonnet plans the sequence of actions
+# - GTA1-7B provides precise click coordinates for each UI element
+```
+
+### UI-TARS + GPT-4o
+
+Combine two different vision models for enhanced capabilities:
+
+```python
+agent = ComputerAgent(
+ "huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B+openai/gpt-4o",
+ tools=[computer]
+)
+
+async for _ in agent.run("Help me fill out this form with my personal information"):
+ pass
+```
+
+## Benefits of Composed Agents
+
+- **Specialized Grounding**: Use models optimized for click prediction accuracy
+- **Flexible Planning**: Choose any LLM for task reasoning and planning
+- **Cost Optimization**: Use smaller grounding models with larger planning models only when needed
+- **Performance**: Leverage the strengths of different model architectures
+
+## Capabilities
+
+Composed agents support both `run()` and `predict_click()`:
+
+```python
+agent = ComputerAgent("huggingface-local/HelloKKMe/GTA1-7B+anthropic/claude-3-5-sonnet-20241022")
+
+# Full computer-use agent capabilities
+async for _ in agent.run("Complete this online form"):
+ pass
+
+# Direct click prediction (uses grounding model only)
+coords = agent.predict_click("find the submit button")
+```
+
+---
+
+For more information on individual model capabilities, see [Computer-Use Agents](./computer-use-agents) and [Grounding Models](./grounding-models).
diff --git a/docs/content/docs/agent-sdk/supported-agents/computer-use-agents.mdx b/docs/content/docs/agent-sdk/supported-agents/computer-use-agents.mdx
new file mode 100644
index 00000000..7aeab043
--- /dev/null
+++ b/docs/content/docs/agent-sdk/supported-agents/computer-use-agents.mdx
@@ -0,0 +1,67 @@
+---
+title: Computer-Use Models
+description: Models that support full computer-use agent capabilities with ComputerAgent.run()
+---
+
+These models support complete computer-use agent functionality through `ComputerAgent.run()`. They can understand natural language instructions and autonomously perform sequences of actions to complete tasks.
+
+All agent loops are compatible with any LLM provider supported by LiteLLM.
+
+See [Running Models Locally](../local-models) for how to use Hugging Face and MLX models on your own machine.
+
+## Anthropic CUAs
+
+Claude models with computer-use capabilities:
+
+- Claude 4.1: `claude-opus-4-1-20250805`
+- Claude 4: `claude-opus-4-20250514`, `claude-sonnet-4-20250514`
+- Claude 3.7: `claude-3-7-sonnet-20250219`
+- Claude 3.5: `claude-3-5-sonnet-20240620`
+
+```python
+agent = ComputerAgent("claude-3-5-sonnet-20241022", tools=[computer])
+async for _ in agent.run("Open Firefox and navigate to github.com"):
+ pass
+```
+
+## OpenAI CUA Preview
+
+OpenAI's computer-use preview model:
+
+- Computer-use-preview: `computer-use-preview`
+
+```python
+agent = ComputerAgent("openai/computer-use-preview", tools=[computer])
+async for _ in agent.run("Take a screenshot and describe what you see"):
+ pass
+```
+
+## UI-TARS 1.5
+
+Unified vision-language model for computer-use:
+
+- `huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B`
+- `huggingface/ByteDance-Seed/UI-TARS-1.5-7B` (requires TGI endpoint)
+
+```python
+agent = ComputerAgent("huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B", tools=[computer])
+async for _ in agent.run("Open the settings menu and change the theme to dark mode"):
+ pass
+```
+
+## GLM-4.5V
+
+Zhipu AI's GLM-4.5V vision-language model with computer-use capabilities:
+
+- `openrouter/z-ai/glm-4.5v`
+- `huggingface-local/zai-org/GLM-4.5V`
+
+```python
+agent = ComputerAgent("openrouter/z-ai/glm-4.5v", tools=[computer])
+async for _ in agent.run("Click on the search bar and type 'hello world'"):
+ pass
+```
+
+---
+
+For details on agent loop behavior and usage, see [Agent Loops](../agent-loops).
diff --git a/docs/content/docs/agent-sdk/supported-agents/grounding-models.mdx b/docs/content/docs/agent-sdk/supported-agents/grounding-models.mdx
new file mode 100644
index 00000000..61c9a70b
--- /dev/null
+++ b/docs/content/docs/agent-sdk/supported-agents/grounding-models.mdx
@@ -0,0 +1,89 @@
+---
+title: Grounding Models
+description: Models that support click prediction with ComputerAgent.predict_click()
+---
+
+These models specialize in UI element grounding and click prediction. They can identify precise coordinates for UI elements based on natural language descriptions, but cannot perform autonomous task planning.
+
+Use `ComputerAgent.predict_click()` to get coordinates for specific UI elements.
+
+## All Computer-Use Agents
+
+All models that support `ComputerAgent.run()` also support `ComputerAgent.predict_click()`:
+
+### Anthropic CUAs
+
+- Claude 4.1: `claude-opus-4-1-20250805`
+- Claude 4: `claude-opus-4-20250514`, `claude-sonnet-4-20250514`
+- Claude 3.7: `claude-3-7-sonnet-20250219`
+- Claude 3.5: `claude-3-5-sonnet-20240620`
+
+### OpenAI CUA Preview
+- Computer-use-preview: `computer-use-preview`
+
+### UI-TARS 1.5
+- `huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B`
+- `huggingface/ByteDance-Seed/UI-TARS-1.5-7B` (requires TGI endpoint)
+
+## Specialized Grounding Models
+
+These models are optimized specifically for click prediction and UI element grounding:
+
+### OmniParser
+
+OCR-focused set-of-marks model that requires an LLM for click prediction:
+
+- `omniparser` (requires combination with any LiteLLM vision model)
+
+### GTA1-7B
+
+State-of-the-art grounding model from the [GUI Agent Grounding Leaderboard](https://gui-agent.github.io/grounding-leaderboard/):
+
+- `huggingface-local/HelloKKMe/GTA1-7B`
+
+## Usage Examples
+
+```python
+# Using any grounding model for click prediction
+agent = ComputerAgent("claude-3-5-sonnet-20241022", tools=[computer])
+
+# Predict coordinates for specific elements
+login_coords = agent.predict_click("find the login button")
+search_coords = agent.predict_click("locate the search text field")
+menu_coords = agent.predict_click("find the hamburger menu icon")
+
+print(f"Login button: {login_coords}")
+print(f"Search field: {search_coords}")
+print(f"Menu icon: {menu_coords}")
+```
+
+```python
+# OmniParser is just for OCR, so it requires an LLM for predict_click
+agent = ComputerAgent("omniparser+anthropic/claude-3-5-sonnet-20241022", tools=[computer])
+
+# Predict click coordinates using composed agent
+coords = agent.predict_click("find the submit button")
+print(f"Click coordinates: {coords}") # (450, 320)
+
+# Note: Cannot use omniparser alone for click prediction
+# This will raise an error:
+# agent = ComputerAgent("omniparser", tools=[computer])
+# coords = agent.predict_click("find button") # Error!
+```
+
+```python
+agent = ComputerAgent("huggingface-local/HelloKKMe/GTA1-7B", tools=[computer])
+
+# Predict click coordinates for UI elements
+coords = agent.predict_click("find the submit button")
+print(f"Click coordinates: {coords}") # (450, 320)
+
+# Note: GTA1 cannot perform autonomous task planning
+# This will raise an error:
+# agent.run("Fill out the form and submit it")
+```
+
+---
+
+For information on combining grounding models with planning capabilities, see [Composed Agents](./composed-agents).
diff --git a/docs/content/docs/agent-sdk/supported-agents/human-in-the-loop.mdx b/docs/content/docs/agent-sdk/supported-agents/human-in-the-loop.mdx
new file mode 100644
index 00000000..8d084d7e
--- /dev/null
+++ b/docs/content/docs/agent-sdk/supported-agents/human-in-the-loop.mdx
@@ -0,0 +1,66 @@
+---
+title: Human-In-The-Loop
+description: Use humans as agents for evaluation, demonstrations, and interactive control
+---
+
+The Agent SDK provides a human tool with native support for a human-in-the-loop, which you can use to evaluate your environment and tools or to create demonstrations. Use it as `grounding_model+human/human` in a composed agent, or as `human/human` directly.
+
+## Getting Started
+
+To start the human agent tool, simply run:
+
+```bash
+python -m agent.human_tool
+```
+
+The UI will show you pending completions. Select a completion to take control of the agent.
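+
+Under the hood, the tool follows a simple queue-and-poll protocol: the bundled `HumanAdapter` POSTs each pending completion to the server's `/queue` endpoint, then polls `/status/{id}` until a human responds or the timeout expires. The polling loop looks roughly like this (simplified sketch, with the HTTP call stubbed out as a `get_status` callable):
+
+```python
+import time
+
+def wait_for_completion(get_status, timeout=300.0, poll_interval=1.0):
+    """Poll until the human completes or fails the call, or the timeout expires."""
+    start = time.time()
+    while True:
+        status = get_status()  # stands in for GET /status/{call_id}
+        if status["status"] == "completed":
+            # Keep whichever of response / tool_calls the human provided
+            return {k: status[k] for k in ("response", "tool_calls") if status.get(k)}
+        if status["status"] == "failed":
+            raise RuntimeError(status.get("error", "Unknown error"))
+        if time.time() - start > timeout:
+            raise TimeoutError("Timeout waiting for human response")
+        time.sleep(poll_interval)
+```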
+
+## Usage Examples
+
+### Direct Human Agent
+
+```python
+from agent import ComputerAgent
+from agent.computer import computer
+
+agent = ComputerAgent(
+ "human/human",
+ tools=[computer]
+)
+
+async for _ in agent.run("Take a screenshot, analyze the UI, and click on the most prominent button"):
+ pass
+```
+
+### Composed with Grounding Model
+
+```python
+agent = ComputerAgent(
+ "huggingface-local/HelloKKMe/GTA1-7B+human/human",
+ tools=[computer]
+)
+
+async for _ in agent.run("Navigate to the settings page and enable dark mode"):
+ pass
+```
+
+## Features
+
+The human-in-the-loop interface provides:
+
+- **Interactive UI**: Web-based interface for reviewing and responding to agent requests
+- **Image Display**: Screenshots with click handlers for direct interaction
+- **Action Accordions**: Support for various computer actions (click, type, keypress, etc.)
+- **Tool Calls**: Full OpenAI-compatible tool call support
+- **Real-time Updates**: Smart polling for responsive UI updates
+
+## Use Cases
+
+- **Evaluation**: Have humans evaluate agent performance and provide ground truth responses
+- **Demonstrations**: Create training data by having humans demonstrate tasks
+- **Interactive Control**: Take manual control when automated agents need human guidance
+- **Testing**: Validate agent, tool, and environment behavior manually
+
+---
+
+For more details on the human tool implementation, see the [Human Tool Documentation](../../tools/human-tool).
diff --git a/docs/content/docs/agent-sdk/supported-agents/meta.json b/docs/content/docs/agent-sdk/supported-agents/meta.json
new file mode 100644
index 00000000..5d50b124
--- /dev/null
+++ b/docs/content/docs/agent-sdk/supported-agents/meta.json
@@ -0,0 +1,10 @@
+{
+ "title": "Supported Agents",
+ "description": "Models and configurations supported by the Agent SDK",
+ "pages": [
+ "computer-use-agents",
+ "grounding-models",
+ "composed-agents",
+ "human-in-the-loop"
+ ]
+}
diff --git a/docs/content/docs/quickstart-cli.mdx b/docs/content/docs/quickstart-cli.mdx
index 84aa80ae..ac11c726 100644
--- a/docs/content/docs/quickstart-cli.mdx
+++ b/docs/content/docs/quickstart-cli.mdx
@@ -169,18 +169,20 @@ python -m agent.cli openai/computer-use-preview
```bash
-uv run --with "cua-agent[cli]" -m agent.cli anthropic/claude-3-5-sonnet-20241022
uv run --with "cua-agent[cli]" -m agent.cli anthropic/claude-opus-4-20250514
+uv run --with "cua-agent[cli]" -m agent.cli anthropic/claude-opus-4-1-20250805
uv run --with "cua-agent[cli]" -m agent.cli anthropic/claude-sonnet-4-20250514
+uv run --with "cua-agent[cli]" -m agent.cli anthropic/claude-3-5-sonnet-20241022
```
```bash
-python -m agent.cli anthropic/claude-3-5-sonnet-20241022
+python -m agent.cli anthropic/claude-opus-4-1-20250805
python -m agent.cli anthropic/claude-opus-4-20250514
python -m agent.cli anthropic/claude-sonnet-4-20250514
+python -m agent.cli anthropic/claude-3-5-sonnet-20241022
```
diff --git a/examples/agent_ui_examples.py b/examples/agent_ui_examples.py
index d5a37119..97f54856 100644
--- a/examples/agent_ui_examples.py
+++ b/examples/agent_ui_examples.py
@@ -13,7 +13,7 @@ from utils import load_dotenv_files
load_dotenv_files()
# Import the create_gradio_ui function
-from agent.ui.gradio.app import create_gradio_ui
+from agent.ui.gradio.ui_components import create_gradio_ui
if __name__ == "__main__":
print("Launching Computer-Use Agent Gradio UI with advanced features...")
diff --git a/libs/python/agent/README.md b/libs/python/agent/README.md
index 0c5595e1..f34692db 100644
--- a/libs/python/agent/README.md
+++ b/libs/python/agent/README.md
@@ -37,6 +37,7 @@ pip install "cua-agent[omni]" # Omniparser + any LLM support
pip install "cua-agent[uitars]" # UI-TARS
pip install "cua-agent[uitars-mlx]" # UI-TARS + MLX support
pip install "cua-agent[uitars-hf]" # UI-TARS + Huggingface support
+pip install "cua-agent[glm45v-hf]" # GLM-4.5V + Huggingface support
pip install "cua-agent[ui]" # Gradio UI support
```
diff --git a/libs/python/agent/agent/__init__.py b/libs/python/agent/agent/__init__.py
index 6797dab6..08d782d3 100644
--- a/libs/python/agent/agent/__init__.py
+++ b/libs/python/agent/agent/__init__.py
@@ -5,7 +5,7 @@ agent - Decorator-based Computer Use Agent with liteLLM integration
import logging
import sys
-from .decorators import agent_loop
+from .decorators import register_agent
from .agent import ComputerAgent
from .types import Messages, AgentResponse
@@ -13,7 +13,7 @@ from .types import Messages, AgentResponse
from . import loops
__all__ = [
- "agent_loop",
+ "register_agent",
"ComputerAgent",
"Messages",
"AgentResponse"
diff --git a/libs/python/agent/agent/adapters/__init__.py b/libs/python/agent/agent/adapters/__init__.py
index 2d9abbe3..3a5c0301 100644
--- a/libs/python/agent/agent/adapters/__init__.py
+++ b/libs/python/agent/agent/adapters/__init__.py
@@ -3,7 +3,9 @@ Adapters package for agent - Custom LLM adapters for LiteLLM
"""
from .huggingfacelocal_adapter import HuggingFaceLocalAdapter
+from .human_adapter import HumanAdapter
__all__ = [
"HuggingFaceLocalAdapter",
+ "HumanAdapter",
]
diff --git a/libs/python/agent/agent/adapters/huggingfacelocal_adapter.py b/libs/python/agent/agent/adapters/huggingfacelocal_adapter.py
index f8706868..46d72db3 100644
--- a/libs/python/agent/agent/adapters/huggingfacelocal_adapter.py
+++ b/libs/python/agent/agent/adapters/huggingfacelocal_adapter.py
@@ -1,5 +1,7 @@
import asyncio
+import functools
import warnings
+from concurrent.futures import ThreadPoolExecutor
from typing import Iterator, AsyncIterator, Dict, List, Any, Optional
from litellm.types.utils import GenericStreamingChunk, ModelResponse
from litellm.llms.custom_llm import CustomLLM
@@ -8,7 +10,7 @@ from litellm import completion, acompletion
# Try to import HuggingFace dependencies
try:
import torch
- from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
+ from transformers import AutoModelForImageTextToText, AutoProcessor
HF_AVAILABLE = True
except ImportError:
HF_AVAILABLE = False
@@ -28,6 +30,7 @@ class HuggingFaceLocalAdapter(CustomLLM):
self.device = device
self.models = {} # Cache for loaded models
self.processors = {} # Cache for loaded processors
+ self._executor = ThreadPoolExecutor(max_workers=1) # Single thread pool
def _load_model_and_processor(self, model_name: str):
"""Load model and processor if not already cached.
@@ -40,7 +43,7 @@ class HuggingFaceLocalAdapter(CustomLLM):
"""
if model_name not in self.models:
# Load model
- model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+ model = AutoModelForImageTextToText.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map=self.device,
@@ -48,7 +51,12 @@ class HuggingFaceLocalAdapter(CustomLLM):
)
# Load processor
- processor = AutoProcessor.from_pretrained(model_name)
+ processor = AutoProcessor.from_pretrained(
+ model_name,
+ min_pixels=3136,
+ max_pixels=4096 * 2160,
+ device_map=self.device
+ )
# Cache them
self.models[model_name] = model
@@ -141,8 +149,7 @@ class HuggingFaceLocalAdapter(CustomLLM):
)
# Move inputs to the same device as model
- if torch.cuda.is_available() and self.device != "cpu":
- inputs = inputs.to("cuda")
+ inputs = inputs.to(model.device)
# Generate response
with torch.no_grad():
@@ -182,7 +189,11 @@ class HuggingFaceLocalAdapter(CustomLLM):
ModelResponse with generated text
"""
# Run _generate in thread pool to avoid blocking
- generated_text = await asyncio.to_thread(self._generate, **kwargs)
+ loop = asyncio.get_event_loop()
+ generated_text = await loop.run_in_executor(
+ self._executor,
+ functools.partial(self._generate, **kwargs)
+ )
return await acompletion(
model=f"huggingface-local/{kwargs['model']}",
@@ -215,7 +226,11 @@ class HuggingFaceLocalAdapter(CustomLLM):
AsyncIterator of GenericStreamingChunk
"""
# Run _generate in thread pool to avoid blocking
- generated_text = await asyncio.to_thread(self._generate, **kwargs)
+ loop = asyncio.get_event_loop()
+ generated_text = await loop.run_in_executor(
+ self._executor,
+ functools.partial(self._generate, **kwargs)
+ )
generic_streaming_chunk: GenericStreamingChunk = {
"finish_reason": "stop",
diff --git a/libs/python/agent/agent/adapters/human_adapter.py b/libs/python/agent/agent/adapters/human_adapter.py
new file mode 100644
index 00000000..0cd4fe02
--- /dev/null
+++ b/libs/python/agent/agent/adapters/human_adapter.py
@@ -0,0 +1,348 @@
+import os
+import asyncio
+import requests
+from typing import List, Dict, Any, Iterator, AsyncIterator
+from litellm.types.utils import GenericStreamingChunk, ModelResponse
+from litellm.llms.custom_llm import CustomLLM
+from litellm import completion, acompletion
+
+
+class HumanAdapter(CustomLLM):
+ """Human Adapter for human-in-the-loop completions.
+
+ This adapter sends completion requests to a human completion server
+ where humans can review and respond to AI requests.
+ """
+
+ def __init__(self, base_url: str | None = None, timeout: float = 300.0, **kwargs):
+ """Initialize the human adapter.
+
+ Args:
+ base_url: Base URL for the human completion server.
+ Defaults to HUMAN_BASE_URL environment variable or http://localhost:8002
+ timeout: Timeout in seconds for waiting for human response
+ **kwargs: Additional arguments
+ """
+ super().__init__()
+ self.base_url = base_url or os.getenv('HUMAN_BASE_URL', 'http://localhost:8002')
+ self.timeout = timeout
+
+ # Ensure base_url doesn't end with slash
+ self.base_url = self.base_url.rstrip('/')
+
+ def _queue_completion(self, messages: List[Dict[str, Any]], model: str) -> str:
+ """Queue a completion request and return the call ID.
+
+ Args:
+ messages: Messages in OpenAI format
+ model: Model name
+
+ Returns:
+ Call ID for tracking the request
+
+ Raises:
+ Exception: If queueing fails
+ """
+ try:
+ response = requests.post(
+ f"{self.base_url}/queue",
+ json={"messages": messages, "model": model},
+ timeout=10
+ )
+ response.raise_for_status()
+ return response.json()["id"]
+ except requests.RequestException as e:
+ raise Exception(f"Failed to queue completion request: {e}")
+
+ def _wait_for_completion(self, call_id: str) -> Dict[str, Any]:
+ """Wait for human to complete the call.
+
+ Args:
+ call_id: ID of the queued completion call
+
+ Returns:
+ Dict containing response and/or tool_calls
+
+ Raises:
+ TimeoutError: If timeout is exceeded
+ Exception: If completion fails
+ """
+ import time
+
+ start_time = time.time()
+
+ while True:
+ try:
+ # Check status
+ status_response = requests.get(f"{self.base_url}/status/{call_id}")
+ status_response.raise_for_status()
+ status_data = status_response.json()
+
+ if status_data["status"] == "completed":
+ result = {}
+ if "response" in status_data and status_data["response"]:
+ result["response"] = status_data["response"]
+ if "tool_calls" in status_data and status_data["tool_calls"]:
+ result["tool_calls"] = status_data["tool_calls"]
+ return result
+ elif status_data["status"] == "failed":
+ error_msg = status_data.get("error", "Unknown error")
+ raise Exception(f"Completion failed: {error_msg}")
+
+ # Check timeout
+ if time.time() - start_time > self.timeout:
+ raise TimeoutError(f"Timeout waiting for human response after {self.timeout} seconds")
+
+ # Wait before checking again
+ time.sleep(1.0)
+
+ except requests.RequestException as e:
+ if time.time() - start_time > self.timeout:
+ raise TimeoutError(f"Timeout waiting for human response: {e}")
+ # Continue trying if we haven't timed out
+ time.sleep(1.0)
+
+ async def _async_wait_for_completion(self, call_id: str) -> Dict[str, Any]:
+ """Async version of wait_for_completion.
+
+ Args:
+ call_id: ID of the queued completion call
+
+ Returns:
+ Dict containing response and/or tool_calls
+
+ Raises:
+ TimeoutError: If timeout is exceeded
+ Exception: If completion fails
+ """
+ import aiohttp
+ import time
+
+ start_time = time.time()
+
+ async with aiohttp.ClientSession() as session:
+ while True:
+ try:
+ # Check status
+ async with session.get(f"{self.base_url}/status/{call_id}") as response:
+ response.raise_for_status()
+ status_data = await response.json()
+
+ if status_data["status"] == "completed":
+ result = {}
+ if "response" in status_data and status_data["response"]:
+ result["response"] = status_data["response"]
+ if "tool_calls" in status_data and status_data["tool_calls"]:
+ result["tool_calls"] = status_data["tool_calls"]
+ return result
+ elif status_data["status"] == "failed":
+ error_msg = status_data.get("error", "Unknown error")
+ raise Exception(f"Completion failed: {error_msg}")
+
+ # Check timeout
+ if time.time() - start_time > self.timeout:
+ raise TimeoutError(f"Timeout waiting for human response after {self.timeout} seconds")
+
+ # Wait before checking again
+ await asyncio.sleep(1.0)
+
+                except aiohttp.ClientError as e:
+ if time.time() - start_time > self.timeout:
+ raise TimeoutError(f"Timeout waiting for human response: {e}")
+ # Continue trying if we haven't timed out
+ await asyncio.sleep(1.0)
+
+ def _generate_response(self, messages: List[Dict[str, Any]], model: str) -> Dict[str, Any]:
+ """Generate a human response for the given messages.
+
+ Args:
+ messages: Messages in OpenAI format
+ model: Model name
+
+ Returns:
+ Dict containing response and/or tool_calls
+ """
+ # Queue the completion request
+ call_id = self._queue_completion(messages, model)
+
+ # Wait for human response
+ response = self._wait_for_completion(call_id)
+
+ return response
+
+ async def _async_generate_response(self, messages: List[Dict[str, Any]], model: str) -> Dict[str, Any]:
+ """Async version of _generate_response.
+
+ Args:
+ messages: Messages in OpenAI format
+ model: Model name
+
+ Returns:
+ Dict containing response and/or tool_calls
+ """
+ # Queue the completion request (sync operation)
+ call_id = self._queue_completion(messages, model)
+
+ # Wait for human response (async)
+ response = await self._async_wait_for_completion(call_id)
+
+ return response
+
+ def completion(self, *args, **kwargs) -> ModelResponse:
+ """Synchronous completion method.
+
+ Returns:
+ ModelResponse with human-generated text or tool calls
+ """
+ messages = kwargs.get('messages', [])
+ model = kwargs.get('model', 'human')
+
+ # Generate human response
+ human_response_data = self._generate_response(messages, model)
+
+ # Create ModelResponse with proper structure
+ from litellm.types.utils import ModelResponse, Choices, Message
+ import uuid
+ import time
+
+ # Create message content based on response type
+ if "tool_calls" in human_response_data and human_response_data["tool_calls"]:
+ # Tool calls response
+ message = Message(
+ role="assistant",
+ content=human_response_data.get("response", ""),
+ tool_calls=human_response_data["tool_calls"]
+ )
+ else:
+ # Text response
+ message = Message(
+ role="assistant",
+ content=human_response_data.get("response", "")
+ )
+
+ choice = Choices(
+ finish_reason="stop",
+ index=0,
+ message=message
+ )
+
+ result = ModelResponse(
+ id=f"human-{uuid.uuid4()}",
+ choices=[choice],
+ created=int(time.time()),
+ model=f"human/{model}",
+ object="chat.completion"
+ )
+
+ return result
+
+ async def acompletion(self, *args, **kwargs) -> ModelResponse:
+ """Asynchronous completion method.
+
+ Returns:
+ ModelResponse with human-generated text or tool calls
+ """
+ messages = kwargs.get('messages', [])
+ model = kwargs.get('model', 'human')
+
+ # Generate human response
+ human_response_data = await self._async_generate_response(messages, model)
+
+ # Create ModelResponse with proper structure
+ from litellm.types.utils import ModelResponse, Choices, Message
+ import uuid
+ import time
+
+ # Create message content based on response type
+ if "tool_calls" in human_response_data and human_response_data["tool_calls"]:
+ # Tool calls response
+ message = Message(
+ role="assistant",
+ content=human_response_data.get("response", ""),
+ tool_calls=human_response_data["tool_calls"]
+ )
+ else:
+ # Text response
+ message = Message(
+ role="assistant",
+ content=human_response_data.get("response", "")
+ )
+
+ choice = Choices(
+ finish_reason="stop",
+ index=0,
+ message=message
+ )
+
+ result = ModelResponse(
+ id=f"human-{uuid.uuid4()}",
+ choices=[choice],
+ created=int(time.time()),
+ model=f"human/{model}",
+ object="chat.completion"
+ )
+
+ return result
+
+ def streaming(self, *args, **kwargs) -> Iterator[GenericStreamingChunk]:
+ """Synchronous streaming method.
+
+ Yields:
+ Streaming chunks with human-generated text or tool calls
+ """
+ messages = kwargs.get('messages', [])
+ model = kwargs.get('model', 'human')
+
+ # Generate human response
+ human_response_data = self._generate_response(messages, model)
+
+ import time
+
+ # Handle tool calls vs text response
+ if "tool_calls" in human_response_data and human_response_data["tool_calls"]:
+ # Stream tool calls as a single chunk
+ generic_chunk: GenericStreamingChunk = {
+ "finish_reason": "tool_calls",
+ "index": 0,
+ "is_finished": True,
+ "text": human_response_data.get("response", ""),
+ "tool_use": human_response_data["tool_calls"],
+ "usage": {"completion_tokens": 1, "prompt_tokens": 0, "total_tokens": 1},
+ }
+ yield generic_chunk
+ else:
+ # Stream text response
+ response_text = human_response_data.get("response", "")
+ generic_chunk: GenericStreamingChunk = {
+ "finish_reason": "stop",
+ "index": 0,
+ "is_finished": True,
+ "text": response_text,
+ "tool_use": None,
+ "usage": {"completion_tokens": len(response_text.split()), "prompt_tokens": 0, "total_tokens": len(response_text.split())},
+ }
+ yield generic_chunk
+
+ async def astreaming(self, *args, **kwargs) -> AsyncIterator[GenericStreamingChunk]:
+ """Asynchronous streaming method.
+
+ Yields:
+ Streaming chunks with human-generated text or tool calls
+ """
+ messages = kwargs.get('messages', [])
+ model = kwargs.get('model', 'human')
+
+ # Generate human response
+        # Generate human response
+        human_response_data = await self._async_generate_response(messages, model)
+
+        # Return as a single streaming chunk, preserving tool calls if present
+        response_text = human_response_data.get("response", "")
+        tool_calls = human_response_data.get("tool_calls")
+        generic_streaming_chunk: GenericStreamingChunk = {
+            "finish_reason": "tool_calls" if tool_calls else "stop",
+            "index": 0,
+            "is_finished": True,
+            "text": response_text,
+            "tool_use": tool_calls,
+            "usage": {"completion_tokens": len(response_text.split()), "prompt_tokens": 0, "total_tokens": len(response_text.split())},
+        }
+
+        yield generic_streaming_chunk
\ No newline at end of file
diff --git a/libs/python/agent/agent/agent.py b/libs/python/agent/agent/agent.py
index 0b9f243a..7f30166f 100644
--- a/libs/python/agent/agent/agent.py
+++ b/libs/python/agent/agent/agent.py
@@ -3,18 +3,20 @@ ComputerAgent - Main agent class that selects and runs agent loops
"""
import asyncio
-from typing import Dict, List, Any, Optional, AsyncGenerator, Union, cast, Callable, Set
+from typing import Dict, List, Any, Optional, AsyncGenerator, Union, cast, Callable, Set, Tuple
from litellm.responses.utils import Usage
-from .types import Messages, Computer
-from .decorators import find_agent_loop
-from .computer_handler import OpenAIComputerHandler, acknowledge_safety_check_callback, check_blocklisted_url
+from .types import Messages, AgentCapability
+from .decorators import find_agent_config
import json
import litellm
import litellm.utils
import inspect
-from .adapters import HuggingFaceLocalAdapter
+from .adapters import (
+ HuggingFaceLocalAdapter,
+ HumanAdapter,
+)
from .callbacks import (
ImageRetentionCallback,
LoggingCallback,
@@ -22,9 +24,14 @@ from .callbacks import (
BudgetManagerCallback,
TelemetryCallback,
)
+from .computers import (
+ AsyncComputerHandler,
+ is_agent_computer,
+ make_computer_handler
+)
def get_json(obj: Any, max_depth: int = 10) -> Any:
- def custom_serializer(o: Any, depth: int = 0, seen: Set[int] = None) -> Any:
+ def custom_serializer(o: Any, depth: int = 0, seen: Optional[Set[int]] = None) -> Any:
if seen is None:
seen = set()
@@ -117,6 +124,13 @@ def sanitize_message(msg: Any) -> Any:
return sanitized
return msg
+def get_output_call_ids(messages: List[Dict[str, Any]]) -> List[str]:
+ call_ids = []
+ for message in messages:
+ if message.get("type") == "computer_call_output" or message.get("type") == "function_call_output":
+ call_ids.append(message.get("call_id"))
+ return call_ids
+
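
The helper above lets the run loop skip items whose outputs the agent loop already emitted itself. A quick illustration with the function reproduced standalone (sample messages are invented):

```python
from typing import Any, Dict, List


def get_output_call_ids(messages: List[Dict[str, Any]]) -> List[str]:
    """Collect call_ids that already have an output item recorded."""
    call_ids = []
    for message in messages:
        if message.get("type") in ("computer_call_output", "function_call_output"):
            call_ids.append(message.get("call_id"))
    return call_ids


messages = [
    {"type": "computer_call", "call_id": "a1"},
    {"type": "computer_call_output", "call_id": "a1"},
    {"type": "function_call_output", "call_id": "b2"},
]
print(get_output_call_ids(messages))  # ['a1', 'b2']
```

Any `computer_call` whose `call_id` appears in this list is then ignored by `_handle_item`, avoiding duplicate action execution.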
class ComputerAgent:
"""
Main agent class that automatically selects the appropriate agent loop
@@ -204,22 +218,26 @@ class ComputerAgent:
hf_adapter = HuggingFaceLocalAdapter(
device="auto"
)
+ human_adapter = HumanAdapter()
litellm.custom_provider_map = [
- {"provider": "huggingface-local", "custom_handler": hf_adapter}
+ {"provider": "huggingface-local", "custom_handler": hf_adapter},
+ {"provider": "human", "custom_handler": human_adapter}
]
+ litellm.suppress_debug_info = True
# == Initialize computer agent ==
# Find the appropriate agent loop
if custom_loop:
self.agent_loop = custom_loop
- self.agent_loop_info = None
+ self.agent_config_info = None
else:
- loop_info = find_agent_loop(model)
- if not loop_info:
- raise ValueError(f"No agent loop found for model: {model}")
- self.agent_loop = loop_info.func
- self.agent_loop_info = loop_info
+ config_info = find_agent_config(model)
+ if not config_info:
+ raise ValueError(f"No agent config found for model: {model}")
+ # Instantiate the agent config class
+ self.agent_loop = config_info.agent_class()
+ self.agent_config_info = config_info
self.tool_schemas = []
self.computer_handler = None
@@ -227,10 +245,6 @@ class ComputerAgent:
async def _initialize_computers(self):
"""Initialize computer objects"""
if not self.tool_schemas:
- for tool in self.tools:
- if hasattr(tool, '_initialized') and not tool._initialized:
- await tool.run()
-
# Process tools and create tool schemas
self.tool_schemas = self._process_tools()
@@ -238,7 +252,7 @@ class ComputerAgent:
computer_handler = None
for schema in self.tool_schemas:
if schema["type"] == "computer":
- computer_handler = OpenAIComputerHandler(schema["computer"].interface)
+ computer_handler = await make_computer_handler(schema["computer"])
break
self.computer_handler = computer_handler
@@ -254,7 +268,7 @@ class ComputerAgent:
for tool in self.tools:
# Check if it's a computer object (has interface attribute)
- if hasattr(tool, 'interface'):
+ if is_agent_computer(tool):
# This is a computer tool - will be handled by agent loop
schemas.append({
"type": "computer",
@@ -389,8 +403,10 @@ class ComputerAgent:
# AGENT OUTPUT PROCESSING
# ============================================================================
- async def _handle_item(self, item: Any, computer: Optional[Computer] = None) -> List[Dict[str, Any]]:
+ async def _handle_item(self, item: Any, computer: Optional[AsyncComputerHandler] = None, ignore_call_ids: Optional[List[str]] = None) -> List[Dict[str, Any]]:
"""Handle each item; may cause a computer action + screenshot."""
+ if ignore_call_ids and item.get("call_id") and item.get("call_id") in ignore_call_ids:
+ return []
item_type = item.get("type", None)
@@ -411,6 +427,9 @@ class ComputerAgent:
# Perform computer actions
action = item.get("action")
action_type = action.get("type")
+ if action_type is None:
+ print(f"Action type cannot be `None`: action={action}, action_type={action_type}")
+ return []
# Extract action arguments (all fields except 'type')
action_args = {k: v for k, v in action.items() if k != "type"}
@@ -436,10 +455,12 @@ class ComputerAgent:
acknowledged_checks = []
for check in pending_checks:
check_message = check.get("message", str(check))
- if acknowledge_safety_check_callback(check_message):
- acknowledged_checks.append(check)
- else:
- raise ValueError(f"Safety check failed: {check_message}")
+ acknowledged_checks.append(check)
+ # TODO: implement a callback for safety checks
+ # if acknowledge_safety_check_callback(check_message, allow_always=True):
+ # acknowledged_checks.append(check)
+ # else:
+ # raise ValueError(f"Safety check failed: {check_message}")
# Create call output
call_output = {
@@ -452,11 +473,12 @@ class ComputerAgent:
},
}
- # Additional URL safety checks for browser environments
- if await computer.get_environment() == "browser":
- current_url = await computer.get_current_url()
- call_output["output"]["current_url"] = current_url
- check_blocklisted_url(current_url)
+ # # Additional URL safety checks for browser environments
+ # if await computer.get_environment() == "browser":
+ # current_url = await computer.get_current_url()
+ # call_output["output"]["current_url"] = current_url
+ # # TODO: implement a callback for URL safety checks
+ # # check_blocklisted_url(current_url)
result = [call_output]
await self._on_computer_call_end(item, result)
@@ -511,6 +533,12 @@ class ComputerAgent:
Returns:
AsyncGenerator that yields response chunks
"""
+ if not self.agent_config_info:
+ raise ValueError("Agent configuration not found")
+
+ capabilities = self.get_capabilities()
+ if "step" not in capabilities:
+ raise ValueError(f"Agent loop {self.agent_config_info.agent_class.__name__} does not support step predictions")
await self._initialize_computers()
@@ -525,7 +553,7 @@ class ComputerAgent:
"messages": messages,
"stream": stream,
"model": self.model,
- "agent_loop": self.agent_loop.__name__,
+ "agent_loop": self.agent_config_info.agent_class.__name__,
**merged_kwargs
}
await self._on_run_start(run_kwargs, old_items)
@@ -555,7 +583,7 @@ class ComputerAgent:
}
# Run agent loop iteration
- result = await self.agent_loop(
+ result = await self.agent_loop.predict_step(
**loop_kwargs,
_on_api_start=self._on_api_start,
_on_api_end=self._on_api_end,
@@ -576,9 +604,12 @@ class ComputerAgent:
# Add agent response to new_items
new_items += result.get("output")
+ # Get output call ids
+ output_call_ids = get_output_call_ids(result.get("output", []))
+
# Handle computer actions
for item in result.get("output"):
- partial_items = await self._handle_item(item, self.computer_handler)
+ partial_items = await self._handle_item(item, self.computer_handler, ignore_call_ids=output_call_ids)
new_items += partial_items
# Yield partial response
@@ -591,4 +622,51 @@ class ComputerAgent:
)
}
- await self._on_run_end(loop_kwargs, old_items, new_items)
\ No newline at end of file
+ await self._on_run_end(loop_kwargs, old_items, new_items)
+
+ async def predict_click(
+ self,
+ instruction: str,
+ image_b64: Optional[str] = None
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates based on image and instruction.
+
+ Args:
+ instruction: Instruction for where to click
+ image_b64: Base64 encoded image (optional, will take screenshot if not provided)
+
+ Returns:
+            Tuple of (x, y) click coordinates, or None if the agent loop does not support click prediction
+ """
+ if not self.agent_config_info:
+ raise ValueError("Agent configuration not found")
+
+ capabilities = self.get_capabilities()
+ if "click" not in capabilities:
+ raise ValueError(f"Agent loop {self.agent_config_info.agent_class.__name__} does not support click predictions")
+ if hasattr(self.agent_loop, 'predict_click'):
+ if not image_b64:
+ if not self.computer_handler:
+ raise ValueError("Computer tool or image_b64 is required for predict_click")
+ image_b64 = await self.computer_handler.screenshot()
+ return await self.agent_loop.predict_click(
+ model=self.model,
+ image_b64=image_b64,
+ instruction=instruction
+ )
+ return None
+
+ def get_capabilities(self) -> List[AgentCapability]:
+ """
+ Get list of capabilities supported by the current agent config.
+
+ Returns:
+ List of capability strings (e.g., ["step", "click"])
+ """
+ if not self.agent_config_info:
+ raise ValueError("Agent configuration not found")
+
+ if hasattr(self.agent_loop, 'get_capabilities'):
+ return self.agent_loop.get_capabilities()
+ return ["step"] # Default capability
\ No newline at end of file
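
`get_capabilities` gates which entry points (`run` needs `"step"`, `predict_click` needs `"click"`) an agent config supports, with `["step"]` as the fallback for loops that do not declare capabilities. A toy sketch of the same dispatch (the loop classes are invented stand-ins, not the real agent configs):

```python
from typing import List


class StepOnlyLoop:
    def get_capabilities(self) -> List[str]:
        return ["step"]


class GroundingLoop:
    def get_capabilities(self) -> List[str]:
        return ["step", "click"]


def supports_click(loop) -> bool:
    # Mirrors ComputerAgent.get_capabilities with its ["step"] default
    caps = loop.get_capabilities() if hasattr(loop, "get_capabilities") else ["step"]
    return "click" in caps


print(supports_click(StepOnlyLoop()))   # False
print(supports_click(GroundingLoop()))  # True
```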
diff --git a/libs/python/agent/agent/callbacks/pii_anonymization.py b/libs/python/agent/agent/callbacks/pii_anonymization.py
index f5c31a61..68f4b2fc 100644
--- a/libs/python/agent/agent/callbacks/pii_anonymization.py
+++ b/libs/python/agent/agent/callbacks/pii_anonymization.py
@@ -9,10 +9,7 @@ import io
import logging
try:
- from presidio_analyzer import AnalyzerEngine
- from presidio_anonymizer import AnonymizerEngine, DeanonymizeEngine
- from presidio_anonymizer.entities import RecognizerResult, OperatorConfig
- from presidio_image_redactor import ImageRedactorEngine
+ # TODO: Add Presidio dependencies
from PIL import Image
PRESIDIO_AVAILABLE = True
except ImportError:
@@ -32,11 +29,7 @@ class PIIAnonymizationCallback(AsyncCallbackHandler):
def __init__(
self,
- anonymize_text: bool = True,
- anonymize_images: bool = True,
- entities_to_anonymize: Optional[List[str]] = None,
- anonymization_operator: str = "replace",
- image_redaction_color: Tuple[int, int, int] = (255, 192, 203) # Pink
+ # TODO: Any extra kwargs if needed
):
"""
Initialize the PII anonymization callback.
@@ -51,23 +44,10 @@ class PIIAnonymizationCallback(AsyncCallbackHandler):
if not PRESIDIO_AVAILABLE:
raise ImportError(
"Presidio is not available. Install with: "
- "pip install presidio-analyzer presidio-anonymizer presidio-image-redactor"
+ "pip install cua-agent[pii-anonymization]"
)
- self.anonymize_text = anonymize_text
- self.anonymize_images = anonymize_images
- self.entities_to_anonymize = entities_to_anonymize
- self.anonymization_operator = anonymization_operator
- self.image_redaction_color = image_redaction_color
-
- # Initialize Presidio engines
- self.analyzer = AnalyzerEngine()
- self.anonymizer = AnonymizerEngine()
- self.deanonymizer = DeanonymizeEngine()
- self.image_redactor = ImageRedactorEngine()
-
- # Store anonymization mappings for deanonymization
- self.anonymization_mappings: Dict[str, Any] = {}
+ # TODO: Implement __init__
async def on_llm_start(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
@@ -79,9 +59,6 @@ class PIIAnonymizationCallback(AsyncCallbackHandler):
Returns:
List of messages with PII anonymized
"""
- if not self.anonymize_text and not self.anonymize_images:
- return messages
-
anonymized_messages = []
for msg in messages:
anonymized_msg = await self._anonymize_message(msg)
@@ -99,9 +76,6 @@ class PIIAnonymizationCallback(AsyncCallbackHandler):
Returns:
List of output with PII deanonymized for tool calls
"""
- if not self.anonymize_text:
- return output
-
deanonymized_output = []
for item in output:
# Only deanonymize tool calls and computer_call messages
@@ -114,146 +88,9 @@ class PIIAnonymizationCallback(AsyncCallbackHandler):
return deanonymized_output
async def _anonymize_message(self, message: Dict[str, Any]) -> Dict[str, Any]:
- """Anonymize PII in a single message."""
- msg_copy = message.copy()
-
- # Anonymize text content
- if self.anonymize_text:
- msg_copy = await self._anonymize_text_content(msg_copy)
-
- # Redact images in computer_call_output
- if self.anonymize_images and msg_copy.get("type") == "computer_call_output":
- msg_copy = await self._redact_image_content(msg_copy)
-
- return msg_copy
-
- async def _anonymize_text_content(self, message: Dict[str, Any]) -> Dict[str, Any]:
- """Anonymize text content in a message."""
- msg_copy = message.copy()
-
- # Handle content array
- content = msg_copy.get("content", [])
- if isinstance(content, str):
- anonymized_text, _ = await self._anonymize_text(content)
- msg_copy["content"] = anonymized_text
- elif isinstance(content, list):
- anonymized_content = []
- for item in content:
- if isinstance(item, dict) and item.get("type") == "text":
- text = item.get("text", "")
- anonymized_text, _ = await self._anonymize_text(text)
- item_copy = item.copy()
- item_copy["text"] = anonymized_text
- anonymized_content.append(item_copy)
- else:
- anonymized_content.append(item)
- msg_copy["content"] = anonymized_content
-
- return msg_copy
-
- async def _redact_image_content(self, message: Dict[str, Any]) -> Dict[str, Any]:
- """Redact PII from images in computer_call_output messages."""
- msg_copy = message.copy()
- output = msg_copy.get("output", {})
-
- if isinstance(output, dict) and "image_url" in output:
- try:
- # Extract base64 image data
- image_url = output["image_url"]
- if image_url.startswith("data:image/"):
- # Parse data URL
- header, data = image_url.split(",", 1)
- image_data = base64.b64decode(data)
-
- # Load image with PIL
- image = Image.open(io.BytesIO(image_data))
-
- # Redact PII from image
- redacted_image = self.image_redactor.redact(image, self.image_redaction_color)
-
- # Convert back to base64
- buffer = io.BytesIO()
- redacted_image.save(buffer, format="PNG")
- redacted_data = base64.b64encode(buffer.getvalue()).decode()
-
- # Update image URL
- output_copy = output.copy()
- output_copy["image_url"] = f"data:image/png;base64,{redacted_data}"
- msg_copy["output"] = output_copy
-
- except Exception as e:
- logger.warning(f"Failed to redact image: {e}")
-
- return msg_copy
+ # TODO: Implement _anonymize_message
+ return message
async def _deanonymize_item(self, item: Dict[str, Any]) -> Dict[str, Any]:
- """Deanonymize PII in tool calls and computer outputs."""
- item_copy = item.copy()
-
- # Handle computer_call arguments
- if item.get("type") == "computer_call":
- args = item_copy.get("args", {})
- if isinstance(args, dict):
- deanonymized_args = {}
- for key, value in args.items():
- if isinstance(value, str):
- deanonymized_value, _ = await self._deanonymize_text(value)
- deanonymized_args[key] = deanonymized_value
- else:
- deanonymized_args[key] = value
- item_copy["args"] = deanonymized_args
-
- return item_copy
-
- async def _anonymize_text(self, text: str) -> Tuple[str, List[RecognizerResult]]:
- """Anonymize PII in text and return the anonymized text and results."""
- if not text.strip():
- return text, []
-
- try:
- # Analyze text for PII
- analyzer_results = self.analyzer.analyze(
- text=text,
- entities=self.entities_to_anonymize,
- language="en"
- )
-
- if not analyzer_results:
- return text, []
-
- # Anonymize the text
- anonymized_result = self.anonymizer.anonymize(
- text=text,
- analyzer_results=analyzer_results,
- operators={entity_type: OperatorConfig(self.anonymization_operator)
- for entity_type in set(result.entity_type for result in analyzer_results)}
- )
-
- # Store mapping for deanonymization
- mapping_key = str(hash(text))
- self.anonymization_mappings[mapping_key] = {
- "original": text,
- "anonymized": anonymized_result.text,
- "results": analyzer_results
- }
-
- return anonymized_result.text, analyzer_results
-
- except Exception as e:
- logger.warning(f"Failed to anonymize text: {e}")
- return text, []
-
- async def _deanonymize_text(self, text: str) -> Tuple[str, bool]:
- """Attempt to deanonymize text using stored mappings."""
- try:
- # Look for matching anonymized text in mappings
- for mapping_key, mapping in self.anonymization_mappings.items():
- if mapping["anonymized"] == text:
- return mapping["original"], True
-
- # If no mapping found, return original text
- return text, False
-
- except Exception as e:
- logger.warning(f"Failed to deanonymize text: {e}")
- return text, False
+ # TODO: Implement _deanonymize_item
+ return item
diff --git a/libs/python/agent/agent/callbacks/trajectory_saver.py b/libs/python/agent/agent/callbacks/trajectory_saver.py
index b59563d5..805b535d 100644
--- a/libs/python/agent/agent/callbacks/trajectory_saver.py
+++ b/libs/python/agent/agent/callbacks/trajectory_saver.py
@@ -51,12 +51,14 @@ class TrajectorySaverCallback(AsyncCallbackHandler):
within the trajectory gets its own folder with screenshots and responses.
"""
- def __init__(self, trajectory_dir: str):
+ def __init__(self, trajectory_dir: str, reset_on_run: bool = True):
"""
Initialize trajectory saver.
Args:
trajectory_dir: Base directory to save trajectories
+ reset_on_run: If True, reset trajectory_id/turn/artifact on each run.
+ If False, continue using existing trajectory_id if set.
"""
self.trajectory_dir = Path(trajectory_dir)
self.trajectory_id: Optional[str] = None
@@ -64,6 +66,7 @@ class TrajectorySaverCallback(AsyncCallbackHandler):
self.current_artifact: int = 0
self.model: Optional[str] = None
self.total_usage: Dict[str, Any] = {}
+ self.reset_on_run = reset_on_run
# Ensure trajectory directory exists
self.trajectory_dir.mkdir(parents=True, exist_ok=True)
@@ -113,32 +116,38 @@ class TrajectorySaverCallback(AsyncCallbackHandler):
async def on_run_start(self, kwargs: Dict[str, Any], old_items: List[Dict[str, Any]]) -> None:
"""Initialize trajectory tracking for a new run."""
model = kwargs.get("model", "unknown")
- model_name_short = model.split("+")[-1].split("/")[-1].lower()[:16]
- if "+" in model:
- model_name_short = model.split("+")[0].lower()[:4] + "_" + model_name_short
+
+ # Only reset trajectory state if reset_on_run is True or no trajectory exists
+ if self.reset_on_run or not self.trajectory_id:
+ model_name_short = model.split("+")[-1].split("/")[-1].lower()[:16]
+ if "+" in model:
+ model_name_short = model.split("+")[0].lower()[:4] + "_" + model_name_short
- # id format: yyyy-mm-dd_model_hhmmss_uuid[:4]
- now = datetime.now()
- self.trajectory_id = f"{now.strftime('%Y-%m-%d')}_{model_name_short}_{now.strftime('%H%M%S')}_{str(uuid.uuid4())[:4]}"
- self.current_turn = 0
- self.current_artifact = 0
- self.model = model
- self.total_usage = {}
-
- # Create trajectory directory
- trajectory_path = self.trajectory_dir / self.trajectory_id
- trajectory_path.mkdir(parents=True, exist_ok=True)
-
- # Save trajectory metadata
- metadata = {
- "trajectory_id": self.trajectory_id,
- "created_at": str(uuid.uuid1().time),
- "status": "running",
- "kwargs": kwargs,
- }
-
- with open(trajectory_path / "metadata.json", "w") as f:
- json.dump(metadata, f, indent=2)
+ # id format: yyyy-mm-dd_model_hhmmss_uuid[:4]
+ now = datetime.now()
+ self.trajectory_id = f"{now.strftime('%Y-%m-%d')}_{model_name_short}_{now.strftime('%H%M%S')}_{str(uuid.uuid4())[:4]}"
+ self.current_turn = 0
+ self.current_artifact = 0
+ self.model = model
+ self.total_usage = {}
+
+ # Create trajectory directory
+ trajectory_path = self.trajectory_dir / self.trajectory_id
+ trajectory_path.mkdir(parents=True, exist_ok=True)
+
+ # Save trajectory metadata
+ metadata = {
+ "trajectory_id": self.trajectory_id,
+ "created_at": str(uuid.uuid1().time),
+ "status": "running",
+ "kwargs": kwargs,
+ }
+
+ with open(trajectory_path / "metadata.json", "w") as f:
+ json.dump(metadata, f, indent=2)
+ else:
+ # Continue with existing trajectory - just update model if needed
+ self.model = model
@override
async def on_run_end(self, kwargs: Dict[str, Any], old_items: List[Dict[str, Any]], new_items: List[Dict[str, Any]]) -> None:
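
The trajectory id encodes the date, a shortened model name, the time, and a uuid fragment. A sketch of how the shortening behaves for a composed `grounding+llm` model string (helper name is illustrative):

```python
import uuid
from datetime import datetime


def make_trajectory_id(model: str, now: datetime) -> str:
    # Mirrors the id construction in on_run_start above
    model_name_short = model.split("+")[-1].split("/")[-1].lower()[:16]
    if "+" in model:
        model_name_short = model.split("+")[0].lower()[:4] + "_" + model_name_short
    return (
        f"{now.strftime('%Y-%m-%d')}_{model_name_short}"
        f"_{now.strftime('%H%M%S')}_{str(uuid.uuid4())[:4]}"
    )


tid = make_trajectory_id("omniparser+openai/gpt-4o", datetime(2025, 1, 2, 3, 4, 5))
print(tid)  # e.g. 2025-01-02_omni_gpt-4o_030405_1a2b
```

Composed models keep the first four characters of the grounding provider as a prefix, so trajectories from `omniparser+…` and plain `openai/…` runs sort into distinguishable folder names.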
diff --git a/libs/python/agent/agent/cli.py b/libs/python/agent/agent/cli.py
index b5d97337..4d17ca15 100644
--- a/libs/python/agent/agent/cli.py
+++ b/libs/python/agent/agent/cli.py
@@ -94,14 +94,14 @@ def print_action(action_type: str, details: Dict[str, Any], total_cost: float):
# Format action details
args_str = ""
if action_type == "click" and "x" in details and "y" in details:
- args_str = f"({details['x']}, {details['y']})"
+ args_str = f"_{details['button']}({details['x']}, {details['y']})"
elif action_type == "type" and "text" in details:
text = details["text"]
if len(text) > 50:
text = text[:47] + "..."
- args_str = f'"{text}"'
- elif action_type == "key" and "key" in details:
- args_str = f"'{details['key']}'"
+ args_str = f'("{text}")'
+ elif action_type == "key" and "text" in details:
+ args_str = f"('{details['text']}')"
elif action_type == "scroll" and "x" in details and "y" in details:
args_str = f"({details['x']}, {details['y']})"
@@ -120,7 +120,7 @@ async def ainput(prompt: str = ""):
async def chat_loop(agent, model: str, container_name: str, initial_prompt: str = "", show_usage: bool = True):
"""Main chat loop with the agent."""
- print_welcome(model, agent.agent_loop.__name__, container_name)
+ print_welcome(model, agent.agent_config_info.agent_class.__name__, container_name)
history = []
@@ -130,7 +130,7 @@ async def chat_loop(agent, model: str, container_name: str, initial_prompt: str
total_cost = 0
while True:
- if history[-1].get("role") != "user":
+ if len(history) == 0 or history[-1].get("role") != "user":
# Get user input with prompt
print_colored("> ", end="")
user_input = await ainput()
@@ -260,7 +260,12 @@ Examples:
help="Show total cost of the agent runs"
)
-
+ parser.add_argument(
+ "-r", "--max-retries",
+ type=int,
+ default=3,
+ help="Maximum number of retries for the LLM API calls"
+ )
args = parser.parse_args()
@@ -327,6 +332,7 @@ Examples:
"model": args.model,
"tools": [computer],
"verbosity": 20 if args.verbose else 30, # DEBUG vs WARNING
+ "max_retries": args.max_retries
}
if args.images > 0:
diff --git a/libs/python/agent/agent/computers/__init__.py b/libs/python/agent/agent/computers/__init__.py
new file mode 100644
index 00000000..7c7194b6
--- /dev/null
+++ b/libs/python/agent/agent/computers/__init__.py
@@ -0,0 +1,41 @@
+"""
+Computer handler factory and interface definitions.
+
+This module provides a factory function to create computer handlers from different
+computer interface types, supporting both the ComputerHandler protocol and the
+Computer library interface.
+"""
+
+from .base import AsyncComputerHandler
+from .cua import cuaComputerHandler
+from .custom import CustomComputerHandler
+from computer import Computer as cuaComputer
+
+def is_agent_computer(computer):
+ """Check if the given computer is a ComputerHandler or CUA Computer."""
+    return isinstance(computer, AsyncComputerHandler) or \
+        isinstance(computer, cuaComputer) or \
+        isinstance(computer, dict)
+
+async def make_computer_handler(computer):
+ """
+ Create a computer handler from a computer interface.
+
+ Args:
+ computer: Either a ComputerHandler instance, Computer instance, or dict of functions
+
+ Returns:
+ ComputerHandler: A computer handler instance
+
+ Raises:
+ ValueError: If the computer type is not supported
+ """
+ if isinstance(computer, AsyncComputerHandler):
+ return computer
+ if isinstance(computer, cuaComputer):
+ computer_handler = cuaComputerHandler(computer)
+ await computer_handler._initialize()
+ return computer_handler
+ if isinstance(computer, dict):
+ return CustomComputerHandler(computer)
+ raise ValueError(f"Unsupported computer type: {type(computer)}")
\ No newline at end of file
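
The factory accepts three shapes: an existing `AsyncComputerHandler`, a cua `Computer`, or a plain dict of callables. A toy sketch of the dict branch, with an invented stand-in class in place of `CustomComputerHandler`:

```python
from typing import Any, Callable, Dict


class ToyDictHandler:
    """Stand-in for CustomComputerHandler: wraps a dict of callables."""

    def __init__(self, functions: Dict[str, Callable]):
        if "screenshot" not in functions:
            raise ValueError("'screenshot' function is required")
        self.functions = functions


def make_handler(computer: Any):
    # Mirrors make_computer_handler's dict branch and its error path
    if isinstance(computer, dict):
        return ToyDictHandler(computer)
    raise ValueError(f"Unsupported computer type: {type(computer)}")


handler = make_handler({"screenshot": lambda: "iVBORw0KGgo="})
print(handler.functions["screenshot"]())  # iVBORw0KGgo=
```

The dict form keeps quick experiments lightweight: anything that can produce a screenshot string can be handed to the agent without subclassing.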
diff --git a/libs/python/agent/agent/computers/base.py b/libs/python/agent/agent/computers/base.py
new file mode 100644
index 00000000..7fbcb0f7
--- /dev/null
+++ b/libs/python/agent/agent/computers/base.py
@@ -0,0 +1,70 @@
+"""
+Base computer interface protocol for agent interactions.
+"""
+
+from typing import Protocol, Literal, List, Dict, Any, Union, Optional, runtime_checkable
+
+
+@runtime_checkable
+class AsyncComputerHandler(Protocol):
+ """Protocol defining the interface for computer interactions."""
+
+ # ==== Computer-Use-Preview Action Space ====
+
+ async def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]:
+ """Get the current environment type."""
+ ...
+
+ async def get_dimensions(self) -> tuple[int, int]:
+ """Get screen dimensions as (width, height)."""
+ ...
+
+ async def screenshot(self) -> str:
+ """Take a screenshot and return as base64 string."""
+ ...
+
+ async def click(self, x: int, y: int, button: str = "left") -> None:
+ """Click at coordinates with specified button."""
+ ...
+
+ async def double_click(self, x: int, y: int) -> None:
+ """Double click at coordinates."""
+ ...
+
+ async def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
+ """Scroll at coordinates with specified scroll amounts."""
+ ...
+
+ async def type(self, text: str) -> None:
+ """Type text."""
+ ...
+
+ async def wait(self, ms: int = 1000) -> None:
+ """Wait for specified milliseconds."""
+ ...
+
+ async def move(self, x: int, y: int) -> None:
+ """Move cursor to coordinates."""
+ ...
+
+ async def keypress(self, keys: Union[List[str], str]) -> None:
+ """Press key combination."""
+ ...
+
+ async def drag(self, path: List[Dict[str, int]]) -> None:
+ """Drag along specified path."""
+ ...
+
+ async def get_current_url(self) -> str:
+ """Get current URL (for browser environments)."""
+ ...
+
+ # ==== Anthropic Action Space ====
+
+ async def left_mouse_down(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse down at coordinates."""
+ ...
+
+ async def left_mouse_up(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse up at coordinates."""
+ ...
diff --git a/libs/python/agent/agent/computer_handler.py b/libs/python/agent/agent/computers/cua.py
similarity index 64%
rename from libs/python/agent/agent/computer_handler.py
rename to libs/python/agent/agent/computers/cua.py
index 4a9f0186..f935be5b 100644
--- a/libs/python/agent/agent/computer_handler.py
+++ b/libs/python/agent/agent/computers/cua.py
@@ -3,34 +3,45 @@ Computer handler implementation for OpenAI computer-use-preview protocol.
"""
import base64
-from typing import Dict, List, Any, Literal
-from .types import Computer
+from typing import Dict, List, Any, Literal, Union, Optional
+from .base import AsyncComputerHandler
+from computer import Computer
-
-class OpenAIComputerHandler:
+class cuaComputerHandler(AsyncComputerHandler):
"""Computer handler that implements the Computer protocol using the computer interface."""
- def __init__(self, computer_interface):
+ def __init__(self, cua_computer: Computer):
"""Initialize with a computer interface (from tool schema)."""
- self.interface = computer_interface
+ self.cua_computer = cua_computer
+ self.interface = None
+
+ async def _initialize(self):
+ if hasattr(self.cua_computer, '_initialized') and not self.cua_computer._initialized:
+ await self.cua_computer.run()
+ self.interface = self.cua_computer.interface
+ # ==== Computer-Use-Preview Action Space ====
+
async def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]:
"""Get the current environment type."""
- # For now, return a default - this could be enhanced to detect actual environment
- return "windows"
-
+ # TODO: detect actual environment
+ return "linux"
+
async def get_dimensions(self) -> tuple[int, int]:
"""Get screen dimensions as (width, height)."""
+ assert self.interface is not None
screen_size = await self.interface.get_screen_size()
return screen_size["width"], screen_size["height"]
async def screenshot(self) -> str:
"""Take a screenshot and return as base64 string."""
+ assert self.interface is not None
screenshot_bytes = await self.interface.screenshot()
return base64.b64encode(screenshot_bytes).decode('utf-8')
async def click(self, x: int, y: int, button: str = "left") -> None:
"""Click at coordinates with specified button."""
+ assert self.interface is not None
if button == "left":
await self.interface.left_click(x, y)
elif button == "right":
@@ -41,28 +52,36 @@ class OpenAIComputerHandler:
async def double_click(self, x: int, y: int) -> None:
"""Double click at coordinates."""
+ assert self.interface is not None
await self.interface.double_click(x, y)
async def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
"""Scroll at coordinates with specified scroll amounts."""
+ assert self.interface is not None
await self.interface.move_cursor(x, y)
await self.interface.scroll(scroll_x, scroll_y)
async def type(self, text: str) -> None:
"""Type text."""
+ assert self.interface is not None
await self.interface.type_text(text)
async def wait(self, ms: int = 1000) -> None:
"""Wait for specified milliseconds."""
+ assert self.interface is not None
import asyncio
await asyncio.sleep(ms / 1000.0)
async def move(self, x: int, y: int) -> None:
"""Move cursor to coordinates."""
+ assert self.interface is not None
await self.interface.move_cursor(x, y)
- async def keypress(self, keys: List[str]) -> None:
+ async def keypress(self, keys: Union[List[str], str]) -> None:
"""Press key combination."""
+ assert self.interface is not None
+ if isinstance(keys, str):
+ keys = keys.replace("-", "+").split("+")
if len(keys) == 1:
await self.interface.press_key(keys[0])
else:
@@ -71,6 +90,7 @@ class OpenAIComputerHandler:
async def drag(self, path: List[Dict[str, int]]) -> None:
"""Drag along specified path."""
+ assert self.interface is not None
if not path:
return
@@ -92,16 +112,13 @@ class OpenAIComputerHandler:
# For now, return empty string
return ""
-
-def acknowledge_safety_check_callback(message: str) -> bool:
- """Safety check callback for user acknowledgment."""
- response = input(
- f"Safety Check Warning: {message}\nDo you want to acknowledge and proceed? (y/n): "
- ).lower()
- return response.strip() == "y"
-
-
-def check_blocklisted_url(url: str) -> None:
- """Check if URL is blocklisted (placeholder implementation)."""
- # This would contain actual URL checking logic
- pass
+ # ==== Anthropic Computer Action Space ====
+ async def left_mouse_down(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse down at coordinates."""
+ assert self.interface is not None
+ await self.interface.mouse_down(x, y, button="left")
+
+ async def left_mouse_up(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse up at coordinates."""
+ assert self.interface is not None
+ await self.interface.mouse_up(x, y, button="left")
\ No newline at end of file
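
The new `keypress` accepts either a list of keys or a single combo string, normalizing `-` to `+` before splitting. The parsing step in isolation (helper name is illustrative):

```python
from typing import List, Union


def normalize_keys(keys: Union[List[str], str]) -> List[str]:
    # Mirrors cuaComputerHandler.keypress: accept "ctrl+c" or "ctrl-c"
    if isinstance(keys, str):
        keys = keys.replace("-", "+").split("+")
    return keys


print(normalize_keys("ctrl-shift+t"))  # ['ctrl', 'shift', 't']
print(normalize_keys(["enter"]))       # ['enter']
```

One consequence of the blanket `-` replacement: a hyphenated key name such as `"page-down"` would be split into two keys, so multi-word keys are safer passed as a list.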
diff --git a/libs/python/agent/agent/computers/custom.py b/libs/python/agent/agent/computers/custom.py
new file mode 100644
index 00000000..b5f801b6
--- /dev/null
+++ b/libs/python/agent/agent/computers/custom.py
@@ -0,0 +1,209 @@
+"""
+Custom computer handler implementation that accepts a dictionary of functions.
+"""
+
+import base64
+from typing import Dict, List, Any, Literal, Union, Optional, Callable
+from PIL import Image
+import io
+from .base import AsyncComputerHandler
+
+
+class CustomComputerHandler(AsyncComputerHandler):
+ """Computer handler that implements the Computer protocol using a dictionary of custom functions."""
+
+ def __init__(self, functions: Dict[str, Callable]):
+ """
+ Initialize with a dictionary of functions.
+
+ Args:
+ functions: Dictionary where keys are method names and values are callable functions.
+ Only 'screenshot' is required, all others are optional.
+
+ Raises:
+ ValueError: If required 'screenshot' function is not provided.
+ """
+ if 'screenshot' not in functions:
+ raise ValueError("'screenshot' function is required in functions dictionary")
+
+ self.functions = functions
+ self._last_screenshot_size: Optional[tuple[int, int]] = None
+
+ async def _call_function(self, func, *args, **kwargs):
+ """
+ Call a function, handling both async and sync functions.
+
+ Args:
+ func: The function to call
+ *args: Positional arguments to pass to the function
+ **kwargs: Keyword arguments to pass to the function
+
+ Returns:
+ The result of the function call
+ """
+        import inspect
+
+ if callable(func):
+ if inspect.iscoroutinefunction(func):
+ return await func(*args, **kwargs)
+ else:
+ return func(*args, **kwargs)
+ else:
+ return func
+
+ async def _get_value(self, attribute: str):
+ """
+ Get value for an attribute, checking both 'get_{attribute}' and '{attribute}' keys.
+
+ Args:
+ attribute: The attribute name to look for
+
+ Returns:
+ The value from the functions dict, called if callable, returned directly if not
+ """
+ # Check for 'get_{attribute}' first
+ get_key = f"get_{attribute}"
+ if get_key in self.functions:
+ return await self._call_function(self.functions[get_key])
+
+ # Check for '{attribute}'
+ if attribute in self.functions:
+ return await self._call_function(self.functions[attribute])
+
+ return None
+
+ def _to_b64_str(self, img: Union[bytes, Image.Image, str]) -> str:
+ """
+ Convert image to base64 string.
+
+ Args:
+ img: Image as bytes, PIL Image, or base64 string
+
+ Returns:
+ str: Base64 encoded image string
+ """
+ if isinstance(img, str):
+ # Already a base64 string
+ return img
+ elif isinstance(img, bytes):
+ # Raw bytes
+ return base64.b64encode(img).decode('utf-8')
+ elif isinstance(img, Image.Image):
+ # PIL Image
+ buffer = io.BytesIO()
+ img.save(buffer, format='PNG')
+ return base64.b64encode(buffer.getvalue()).decode('utf-8')
+ else:
+ raise ValueError(f"Unsupported image type: {type(img)}")
+
+ # ==== Computer-Use-Preview Action Space ====
+
+ async def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]:
+ """Get the current environment type."""
+ result = await self._get_value('environment')
+ if result is None:
+ return "linux"
+ assert result in ["windows", "mac", "linux", "browser"]
+ return result # type: ignore
+
+ async def get_dimensions(self) -> tuple[int, int]:
+ """Get screen dimensions as (width, height)."""
+ result = await self._get_value('dimensions')
+ if result is not None:
+ return result # type: ignore
+
+ # Fallback: use last screenshot size if available
+ if not self._last_screenshot_size:
+ await self.screenshot()
+ assert self._last_screenshot_size is not None, "Failed to get screenshot size"
+
+ return self._last_screenshot_size
+
+ async def screenshot(self) -> str:
+ """Take a screenshot and return as base64 string."""
+ result = await self._call_function(self.functions['screenshot'])
+ b64_str = self._to_b64_str(result) # type: ignore
+
+ # Try to extract dimensions for fallback use
+ try:
+ if isinstance(result, Image.Image):
+ self._last_screenshot_size = result.size
+ elif isinstance(result, bytes):
+ # Try to decode bytes to get dimensions
+ img = Image.open(io.BytesIO(result))
+ self._last_screenshot_size = img.size
+ except Exception:
+ # If we can't get dimensions, that's okay
+ pass
+
+ return b64_str
+
+ async def click(self, x: int, y: int, button: str = "left") -> None:
+ """Click at coordinates with specified button."""
+ if 'click' in self.functions:
+ await self._call_function(self.functions['click'], x, y, button)
+ # No-op if not implemented
+
+ async def double_click(self, x: int, y: int) -> None:
+ """Double click at coordinates."""
+ if 'double_click' in self.functions:
+ await self._call_function(self.functions['double_click'], x, y)
+ # No-op if not implemented
+
+ async def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
+ """Scroll at coordinates with specified scroll amounts."""
+ if 'scroll' in self.functions:
+ await self._call_function(self.functions['scroll'], x, y, scroll_x, scroll_y)
+ # No-op if not implemented
+
+ async def type(self, text: str) -> None:
+ """Type text."""
+ if 'type' in self.functions:
+ await self._call_function(self.functions['type'], text)
+ # No-op if not implemented
+
+ async def wait(self, ms: int = 1000) -> None:
+ """Wait for specified milliseconds."""
+ if 'wait' in self.functions:
+ await self._call_function(self.functions['wait'], ms)
+ else:
+ # Default implementation
+ import asyncio
+ await asyncio.sleep(ms / 1000.0)
+
+ async def move(self, x: int, y: int) -> None:
+ """Move cursor to coordinates."""
+ if 'move' in self.functions:
+ await self._call_function(self.functions['move'], x, y)
+ # No-op if not implemented
+
+ async def keypress(self, keys: Union[List[str], str]) -> None:
+ """Press key combination."""
+ if 'keypress' in self.functions:
+ await self._call_function(self.functions['keypress'], keys)
+ # No-op if not implemented
+
+ async def drag(self, path: List[Dict[str, int]]) -> None:
+ """Drag along specified path."""
+ if 'drag' in self.functions:
+ await self._call_function(self.functions['drag'], path)
+ # No-op if not implemented
+
+    async def get_current_url(self) -> str:
+        """Get current URL (for browser environments)."""
+        result = await self._get_value('current_url')
+        return result if result is not None else ""  # Default fallback
+
+ async def left_mouse_down(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse down at coordinates."""
+ if 'left_mouse_down' in self.functions:
+ await self._call_function(self.functions['left_mouse_down'], x, y)
+ # No-op if not implemented
+
+ async def left_mouse_up(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse up at coordinates."""
+ if 'left_mouse_up' in self.functions:
+ await self._call_function(self.functions['left_mouse_up'], x, y)
+ # No-op if not implemented
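The handler above dispatches every action to a user-supplied callable, awaiting it only when it is a coroutine function. A minimal, self-contained sketch of that dispatch pattern (the `MiniHandler` name and the fake screenshot bytes are illustrative stand-ins, not part of the SDK):

```python
import asyncio
import base64
import inspect

# Stand-in mirroring CustomComputerHandler's dispatch pattern: each method
# looks up a user-supplied callable and awaits it only if it is async.
class MiniHandler:
    def __init__(self, functions):
        if "screenshot" not in functions:
            raise ValueError("'screenshot' function is required")
        self.functions = functions

    async def _call(self, func, *args):
        if inspect.iscoroutinefunction(func):
            return await func(*args)
        return func(*args)

    async def screenshot(self):
        return await self._call(self.functions["screenshot"])

    async def click(self, x, y, button="left"):
        if "click" in self.functions:
            await self._call(self.functions["click"], x, y, button)
        # No-op if the user did not supply 'click'

clicks = []

def take_screenshot():
    # Return placeholder bytes encoded as base64, as _to_b64_str would
    return base64.b64encode(b"fake-png-bytes").decode("utf-8")

async def log_click(x, y, button):
    clicks.append((x, y, button))

handler = MiniHandler({"screenshot": take_screenshot, "click": log_click})
shot = asyncio.run(handler.screenshot())
asyncio.run(handler.click(10, 20))
```

Note how a sync `screenshot` and an async `click` coexist behind the same interface; that is the main convenience the dictionary-of-functions design buys.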
diff --git a/libs/python/agent/agent/decorators.py b/libs/python/agent/agent/decorators.py
index 0b31c25a..7fba0443 100644
--- a/libs/python/agent/agent/decorators.py
+++ b/libs/python/agent/agent/decorators.py
@@ -2,89 +2,51 @@
Decorators for agent - agent_loop decorator
"""
-import asyncio
-import inspect
-from typing import Dict, List, Any, Callable, Optional
-from functools import wraps
-
-from .types import AgentLoopInfo
+from typing import List, Optional
+from .types import AgentConfigInfo
# Global registry
-_agent_loops: List[AgentLoopInfo] = []
+_agent_configs: List[AgentConfigInfo] = []
-def agent_loop(models: str, priority: int = 0):
+def register_agent(models: str, priority: int = 0):
"""
- Decorator to register an agent loop function.
+ Decorator to register an AsyncAgentConfig class.
Args:
models: Regex pattern to match supported models
- priority: Priority for loop selection (higher = more priority)
+ priority: Priority for agent selection (higher = more priority)
"""
- def decorator(func: Callable):
- # Validate function signature
- sig = inspect.signature(func)
- required_params = {'messages', 'model'}
- func_params = set(sig.parameters.keys())
+ def decorator(agent_class: type):
+ # Validate that the class implements AsyncAgentConfig protocol
+ if not hasattr(agent_class, 'predict_step'):
+ raise ValueError(f"Agent class {agent_class.__name__} must implement predict_step method")
+ if not hasattr(agent_class, 'predict_click'):
+ raise ValueError(f"Agent class {agent_class.__name__} must implement predict_click method")
+ if not hasattr(agent_class, 'get_capabilities'):
+ raise ValueError(f"Agent class {agent_class.__name__} must implement get_capabilities method")
- if not required_params.issubset(func_params):
- missing = required_params - func_params
- raise ValueError(f"Agent loop function must have parameters: {missing}")
-
- # Register the loop
- loop_info = AgentLoopInfo(
- func=func,
+ # Register the agent config
+ config_info = AgentConfigInfo(
+ agent_class=agent_class,
models_regex=models,
priority=priority
)
- _agent_loops.append(loop_info)
+ _agent_configs.append(config_info)
# Sort by priority (highest first)
- _agent_loops.sort(key=lambda x: x.priority, reverse=True)
+ _agent_configs.sort(key=lambda x: x.priority, reverse=True)
- @wraps(func)
- async def wrapper(*args, **kwargs):
- # Wrap the function in an asyncio.Queue for cancellation support
- queue = asyncio.Queue()
- task = None
-
- try:
- # Create a task that can be cancelled
- async def run_loop():
- try:
- result = await func(*args, **kwargs)
- await queue.put(('result', result))
- except Exception as e:
- await queue.put(('error', e))
-
- task = asyncio.create_task(run_loop())
-
- # Wait for result or cancellation
- event_type, data = await queue.get()
-
- if event_type == 'error':
- raise data
- return data
-
- except asyncio.CancelledError:
- if task:
- task.cancel()
- try:
- await task
- except asyncio.CancelledError:
- pass
- raise
-
- return wrapper
+ return agent_class
return decorator
-def get_agent_loops() -> List[AgentLoopInfo]:
- """Get all registered agent loops"""
- return _agent_loops.copy()
+def get_agent_configs() -> List[AgentConfigInfo]:
+ """Get all registered agent configs"""
+ return _agent_configs.copy()
-def find_agent_loop(model: str) -> Optional[AgentLoopInfo]:
- """Find the best matching agent loop for a model"""
- for loop_info in _agent_loops:
- if loop_info.matches_model(model):
- return loop_info
+def find_agent_config(model: str) -> Optional[AgentConfigInfo]:
+ """Find the best matching agent config for a model"""
+ for config_info in _agent_configs:
+ if config_info.matches_model(model):
+ return config_info
return None
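The decorator rewrite replaces per-function loops with a registry of agent classes matched to model strings by regex and sorted by priority. A condensed sketch of that registry mechanism, assuming simplified shapes (the real code stores `AgentConfigInfo` objects rather than bare tuples):

```python
import re

# Registry of (compiled regex, priority, agent class), highest priority first
_configs = []

def register_agent(models, priority=0):
    def decorator(cls):
        # Validate the AsyncAgentConfig protocol, as the real decorator does
        for name in ("predict_step", "predict_click", "get_capabilities"):
            if not hasattr(cls, name):
                raise ValueError(f"{cls.__name__} must implement {name}")
        _configs.append((re.compile(models), priority, cls))
        _configs.sort(key=lambda c: c[1], reverse=True)
        return cls
    return decorator

def find_agent_config(model):
    # First match wins; entries are already sorted by priority
    for pattern, _, cls in _configs:
        if pattern.match(model):
            return cls
    return None

@register_agent(models=r"openai/.*", priority=1)
class OpenAIAgent:
    def predict_step(self): ...
    def predict_click(self): ...
    def get_capabilities(self): ...
```

Because registration happens at class-definition time, importing a provider module is enough to make its agents discoverable by model string.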
diff --git a/libs/python/agent/agent/human_tool/__init__.py b/libs/python/agent/agent/human_tool/__init__.py
new file mode 100644
index 00000000..f57fb305
--- /dev/null
+++ b/libs/python/agent/agent/human_tool/__init__.py
@@ -0,0 +1,29 @@
+"""
+Human-in-the-Loop Completion Tool
+
+This package provides a human-in-the-loop completion system that allows
+AI agents to request human assistance for complex decisions or responses.
+
+Components:
+- server.py: FastAPI server with completion queue management
+- ui.py: Gradio UI for human interaction
+- __main__.py: Combined server and UI application
+
+Usage:
+ # Run the server and UI
+ python -m agent.human_tool
+
+ # Or run components separately
+ python -m agent.human_tool.server # API server only
+ python -m agent.human_tool.ui # UI only
+"""
+
+from .server import CompletionQueue, completion_queue
+from .ui import HumanCompletionUI, create_ui
+
+__all__ = [
+ "CompletionQueue",
+ "completion_queue",
+ "HumanCompletionUI",
+ "create_ui"
+]
diff --git a/libs/python/agent/agent/human_tool/__main__.py b/libs/python/agent/agent/human_tool/__main__.py
new file mode 100644
index 00000000..e1ceed50
--- /dev/null
+++ b/libs/python/agent/agent/human_tool/__main__.py
@@ -0,0 +1,38 @@
+#!/usr/bin/env python3
+"""
+Human-in-the-Loop Completion Server and UI
+
+This module combines the FastAPI server for handling completion requests
+with a Gradio UI for human interaction.
+"""
+
+import gradio as gr
+from .server import app as fastapi_app
+from .ui import create_ui
+
+# Create the Gradio demo
+gradio_demo = create_ui()
+
+# Mount Gradio on FastAPI
+CUSTOM_PATH = "/gradio"
+app = gr.mount_gradio_app(fastapi_app, gradio_demo, path=CUSTOM_PATH)
+
+# Add an informational root endpoint that points to the Gradio UI
+@fastapi_app.get("/")
+async def redirect_to_ui():
+    """Return server info with links to the Gradio UI and API docs."""
+ return {
+ "message": "Human Completion Server is running",
+ "ui_url": "/gradio",
+ "api_docs": "/docs"
+ }
+
+if __name__ == "__main__":
+ import uvicorn
+ print("🚀 Starting Human-in-the-Loop Completion Server...")
+ print("📊 API Server: http://localhost:8002")
+ print("🎨 Gradio UI: http://localhost:8002/gradio")
+ print("📚 API Docs: http://localhost:8002/docs")
+
+ uvicorn.run(app, host="0.0.0.0", port=8002)
diff --git a/libs/python/agent/agent/human_tool/server.py b/libs/python/agent/agent/human_tool/server.py
new file mode 100644
index 00000000..c5d08cfe
--- /dev/null
+++ b/libs/python/agent/agent/human_tool/server.py
@@ -0,0 +1,234 @@
+import asyncio
+import uuid
+from datetime import datetime
+from typing import Dict, List, Any, Optional
+from dataclasses import dataclass
+from enum import Enum
+
+from fastapi import FastAPI, HTTPException
+from pydantic import BaseModel
+
+
+class CompletionStatus(str, Enum):
+ PENDING = "pending"
+ COMPLETED = "completed"
+ FAILED = "failed"
+
+
+@dataclass
+class CompletionCall:
+ id: str
+ messages: List[Dict[str, Any]]
+ model: str
+ status: CompletionStatus
+ created_at: datetime
+ completed_at: Optional[datetime] = None
+ response: Optional[str] = None
+ tool_calls: Optional[List[Dict[str, Any]]] = None
+ error: Optional[str] = None
+
+
+class ToolCall(BaseModel):
+ id: str
+ type: str = "function"
+ function: Dict[str, Any]
+
+
+class CompletionRequest(BaseModel):
+ messages: List[Dict[str, Any]]
+ model: str
+
+
+class CompletionResponse(BaseModel):
+ response: Optional[str] = None
+ tool_calls: Optional[List[Dict[str, Any]]] = None
+
+
+class CompletionQueue:
+ def __init__(self):
+ self._queue: Dict[str, CompletionCall] = {}
+ self._pending_order: List[str] = []
+ self._lock = asyncio.Lock()
+
+ async def add_completion(self, messages: List[Dict[str, Any]], model: str) -> str:
+ """Add a completion call to the queue."""
+ async with self._lock:
+ call_id = str(uuid.uuid4())
+ completion_call = CompletionCall(
+ id=call_id,
+ messages=messages,
+ model=model,
+ status=CompletionStatus.PENDING,
+ created_at=datetime.now()
+ )
+ self._queue[call_id] = completion_call
+ self._pending_order.append(call_id)
+ return call_id
+
+ async def get_pending_calls(self) -> List[Dict[str, Any]]:
+ """Get all pending completion calls."""
+ async with self._lock:
+ pending_calls = []
+ for call_id in self._pending_order:
+ if call_id in self._queue and self._queue[call_id].status == CompletionStatus.PENDING:
+ call = self._queue[call_id]
+ pending_calls.append({
+ "id": call.id,
+ "model": call.model,
+ "created_at": call.created_at.isoformat(),
+ "messages": call.messages
+ })
+ return pending_calls
+
+ async def get_call_status(self, call_id: str) -> Optional[Dict[str, Any]]:
+ """Get the status of a specific completion call."""
+ async with self._lock:
+ if call_id not in self._queue:
+ return None
+
+ call = self._queue[call_id]
+ result = {
+ "id": call.id,
+ "status": call.status.value,
+ "created_at": call.created_at.isoformat(),
+ "model": call.model,
+ "messages": call.messages
+ }
+
+ if call.completed_at:
+ result["completed_at"] = call.completed_at.isoformat()
+ if call.response:
+ result["response"] = call.response
+ if call.tool_calls:
+ result["tool_calls"] = call.tool_calls
+ if call.error:
+ result["error"] = call.error
+
+ return result
+
+ async def complete_call(self, call_id: str, response: Optional[str] = None, tool_calls: Optional[List[Dict[str, Any]]] = None) -> bool:
+ """Mark a completion call as completed with a response or tool calls."""
+ async with self._lock:
+ if call_id not in self._queue:
+ return False
+
+ call = self._queue[call_id]
+ if call.status != CompletionStatus.PENDING:
+ return False
+
+ call.status = CompletionStatus.COMPLETED
+ call.completed_at = datetime.now()
+ call.response = response
+ call.tool_calls = tool_calls
+
+ # Remove from pending order
+ if call_id in self._pending_order:
+ self._pending_order.remove(call_id)
+
+ return True
+
+ async def fail_call(self, call_id: str, error: str) -> bool:
+ """Mark a completion call as failed with an error."""
+ async with self._lock:
+ if call_id not in self._queue:
+ return False
+
+ call = self._queue[call_id]
+ if call.status != CompletionStatus.PENDING:
+ return False
+
+ call.status = CompletionStatus.FAILED
+ call.completed_at = datetime.now()
+ call.error = error
+
+ # Remove from pending order
+ if call_id in self._pending_order:
+ self._pending_order.remove(call_id)
+
+ return True
+
+ async def wait_for_completion(self, call_id: str, timeout: float = 300.0) -> Optional[str]:
+ """Wait for a completion call to be completed and return the response."""
+        start_time = asyncio.get_running_loop().time()
+
+ while True:
+ status = await self.get_call_status(call_id)
+ if not status:
+ return None
+
+ if status["status"] == CompletionStatus.COMPLETED.value:
+ return status.get("response")
+ elif status["status"] == CompletionStatus.FAILED.value:
+ raise Exception(f"Completion failed: {status.get('error', 'Unknown error')}")
+
+ # Check timeout
+            if asyncio.get_running_loop().time() - start_time > timeout:
+ await self.fail_call(call_id, "Timeout waiting for human response")
+ raise TimeoutError("Timeout waiting for human response")
+
+ # Wait a bit before checking again
+ await asyncio.sleep(0.5)
+
+
+# Global queue instance
+completion_queue = CompletionQueue()
+
+# FastAPI app
+app = FastAPI(title="Human Completion Server", version="1.0.0")
+
+
+@app.post("/queue", response_model=Dict[str, str])
+async def queue_completion(request: CompletionRequest):
+ """Add a completion request to the queue."""
+ call_id = await completion_queue.add_completion(request.messages, request.model)
+ return {"id": call_id, "status": "queued"}
+
+
+@app.get("/pending")
+async def list_pending():
+ """List all pending completion calls."""
+ pending_calls = await completion_queue.get_pending_calls()
+ return {"pending_calls": pending_calls}
+
+
+@app.get("/status/{call_id}")
+async def get_status(call_id: str):
+ """Get the status of a specific completion call."""
+ status = await completion_queue.get_call_status(call_id)
+ if not status:
+ raise HTTPException(status_code=404, detail="Completion call not found")
+ return status
+
+
+@app.post("/complete/{call_id}")
+async def complete_call(call_id: str, response: CompletionResponse):
+ """Complete a call with a human response."""
+ success = await completion_queue.complete_call(
+ call_id,
+ response=response.response,
+ tool_calls=response.tool_calls
+ )
+ if success:
+ return {"status": "success", "message": "Call completed"}
+ else:
+ raise HTTPException(status_code=404, detail="Call not found or already completed")
+
+
+@app.post("/fail/{call_id}")
+async def fail_call(call_id: str, error: Dict[str, str]):
+ """Mark a call as failed."""
+ success = await completion_queue.fail_call(call_id, error.get("error", "Unknown error"))
+ if not success:
+ raise HTTPException(status_code=404, detail="Completion call not found or already completed")
+ return {"status": "failed"}
+
+
+@app.get("/")
+async def root():
+ """Root endpoint."""
+ return {"message": "Human Completion Server is running"}
+
+
+if __name__ == "__main__":
+ import uvicorn
+ uvicorn.run(app, host="0.0.0.0", port=8002)
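The queue's lifecycle is: an agent queues a completion and polls `wait_for_completion` while a human marks the call completed. A trimmed-down, runnable model of that handoff (the `MiniQueue` name is an illustrative stand-in for `CompletionQueue`, with HTTP and locking details omitted):

```python
import asyncio
import uuid

# Trimmed-down model of CompletionQueue: queue -> pending -> completed.
class MiniQueue:
    def __init__(self):
        self._calls = {}

    async def add(self, messages, model):
        call_id = str(uuid.uuid4())
        self._calls[call_id] = {"messages": messages, "model": model,
                                "status": "pending", "response": None}
        return call_id

    async def complete(self, call_id, response):
        call = self._calls.get(call_id)
        if call is None or call["status"] != "pending":
            return False
        call.update(status="completed", response=response)
        return True

    async def wait(self, call_id, timeout=5.0):
        # Poll until the call is completed, as wait_for_completion does
        async def poll():
            while self._calls[call_id]["status"] == "pending":
                await asyncio.sleep(0.01)
            return self._calls[call_id]["response"]
        return await asyncio.wait_for(poll(), timeout)

async def demo():
    q = MiniQueue()
    call_id = await q.add([{"role": "user", "content": "Proceed?"}], "human/human")

    # A "human" answers from another task while the agent waits
    async def human():
        await asyncio.sleep(0.05)
        await q.complete(call_id, "yes")

    task = asyncio.create_task(human())
    result = await q.wait(call_id)
    await task
    return result

answer = asyncio.run(demo())
```

In the real server the same handoff crosses process boundaries: the agent POSTs to `/queue` and polls `/status/{call_id}`, while the Gradio UI POSTs the human's answer to `/complete/{call_id}`.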
diff --git a/libs/python/agent/agent/human_tool/ui.py b/libs/python/agent/agent/human_tool/ui.py
new file mode 100644
index 00000000..f4a9fb4f
--- /dev/null
+++ b/libs/python/agent/agent/human_tool/ui.py
@@ -0,0 +1,630 @@
+import gradio as gr
+import json
+import time
+from typing import List, Dict, Any, Optional
+from datetime import datetime
+import requests
+import base64
+import io
+from PIL import Image
+
+class HumanCompletionUI:
+ def __init__(self, server_url: str = "http://localhost:8002"):
+ self.server_url = server_url
+ self.current_call_id: Optional[str] = None
+ self.refresh_interval = 2.0 # seconds
+ self.last_image = None # Store the last image for display
+
+ def format_messages_for_chatbot(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
+ """Format messages for display in gr.Chatbot with type='messages'."""
+ formatted = []
+ for msg in messages:
+ role = msg.get("role", "user")
+ content = msg.get("content", "")
+ tool_calls = msg.get("tool_calls", [])
+
+ # Handle different content formats
+ if isinstance(content, list):
+ # Multi-modal content - can include text and images
+ formatted_content = []
+ for item in content:
+ if item.get("type") == "text":
+ text = item.get("text", "")
+ if text.strip(): # Only add non-empty text
+ formatted_content.append(text)
+ elif item.get("type") == "image_url":
+ image_url = item.get("image_url", {}).get("url", "")
+ if image_url:
+ # Check if it's a base64 image or URL
+ if image_url.startswith("data:image"):
+ # For base64 images, decode and create gr.Image
+ try:
+ header, data = image_url.split(",", 1)
+ image_data = base64.b64decode(data)
+ image = Image.open(io.BytesIO(image_data))
+ formatted_content.append(gr.Image(value=image))
+ except Exception as e:
+ print(f"Error loading image: {e}")
+ formatted_content.append(f"[Image loading error: {e}]")
+ else:
+ # For URL images, create gr.Image with URL
+ formatted_content.append(gr.Image(value=image_url))
+
+ # Determine final content format
+ if len(formatted_content) == 1:
+ content = formatted_content[0]
+ elif len(formatted_content) > 1:
+ content = formatted_content
+ else:
+ content = "[Empty content]"
+
+ # Ensure role is valid for Gradio Chatbot
+ if role not in ["user", "assistant"]:
+ role = "assistant" if role == "system" else "user"
+
+ # Invert roles for better display in human UI context
+ # (what the AI says becomes "user", what human should respond becomes "assistant")
+ if role == "user":
+ role = "assistant"
+ else:
+ role = "user"
+
+ # Add the main message if it has content
+ if content and str(content).strip():
+ formatted.append({"role": role, "content": content})
+
+ # Handle tool calls - create separate messages for each tool call
+ if tool_calls:
+ for tool_call in tool_calls:
+ function_name = tool_call.get("function", {}).get("name", "unknown")
+ arguments_str = tool_call.get("function", {}).get("arguments", "{}")
+
+ try:
+ # Parse arguments to format them nicely
+ arguments = json.loads(arguments_str)
+ formatted_args = json.dumps(arguments, indent=2)
+ except json.JSONDecodeError:
+ # If parsing fails, use the raw string
+ formatted_args = arguments_str
+
+ # Create a formatted message for the tool call
+ tool_call_content = f"```json\n{formatted_args}\n```"
+
+ formatted.append({
+ "role": role,
+ "content": tool_call_content,
+ "metadata": {"title": f"🛠️ Used {function_name}"}
+ })
+
+ return formatted
+
+ def get_pending_calls(self) -> List[Dict[str, Any]]:
+ """Get pending calls from the server."""
+ try:
+ response = requests.get(f"{self.server_url}/pending", timeout=5)
+ if response.status_code == 200:
+ return response.json().get("pending_calls", [])
+ except Exception as e:
+ print(f"Error fetching pending calls: {e}")
+ return []
+
+ def complete_call_with_response(self, call_id: str, response: str) -> bool:
+ """Complete a call with a text response."""
+ try:
+ response_data = {"response": response}
+ response_obj = requests.post(
+ f"{self.server_url}/complete/{call_id}",
+ json=response_data,
+ timeout=10
+ )
+ response_obj.raise_for_status()
+ return True
+ except requests.RequestException as e:
+ print(f"Error completing call: {e}")
+ return False
+
+ def complete_call_with_tool_calls(self, call_id: str, tool_calls: List[Dict[str, Any]]) -> bool:
+ """Complete a call with tool calls."""
+ try:
+ response_data = {"tool_calls": tool_calls}
+ response_obj = requests.post(
+ f"{self.server_url}/complete/{call_id}",
+ json=response_data,
+ timeout=10
+ )
+ response_obj.raise_for_status()
+ return True
+ except requests.RequestException as e:
+ print(f"Error completing call: {e}")
+ return False
+
+ def complete_call(self, call_id: str, response: Optional[str] = None, tool_calls: Optional[List[Dict[str, Any]]] = None) -> bool:
+ """Complete a call with either a response or tool calls."""
+ try:
+ response_data = {}
+ if response:
+ response_data["response"] = response
+ if tool_calls:
+ response_data["tool_calls"] = tool_calls
+
+ response_obj = requests.post(
+ f"{self.server_url}/complete/{call_id}",
+ json=response_data,
+ timeout=10
+ )
+ response_obj.raise_for_status()
+ return True
+ except requests.RequestException as e:
+ print(f"Error completing call: {e}")
+ return False
+
+ def get_last_image_from_messages(self, messages: List[Dict[str, Any]]) -> Optional[Any]:
+ """Extract the last image from the messages for display above conversation."""
+ last_image = None
+
+ for msg in reversed(messages): # Start from the last message
+ content = msg.get("content", "")
+
+ if isinstance(content, list):
+ for item in reversed(content): # Get the last image in the message
+ if item.get("type") == "image_url":
+ image_url = item.get("image_url", {}).get("url", "")
+ if image_url:
+ if image_url.startswith("data:image"):
+ # For base64 images, create a gr.Image component
+ try:
+ header, data = image_url.split(",", 1)
+ image_data = base64.b64decode(data)
+ image = Image.open(io.BytesIO(image_data))
+ return image
+ except Exception as e:
+ print(f"Error loading image: {e}")
+ continue
+ else:
+ # For URL images, return the URL
+ return image_url
+
+ return last_image
+
+ def refresh_pending_calls(self):
+ """Refresh the list of pending calls."""
+ pending_calls = self.get_pending_calls()
+
+ if not pending_calls:
+ return (
+ gr.update(choices=["latest"], value="latest"), # dropdown
+ gr.update(value=None), # image (no image)
+ gr.update(value=[]), # chatbot (empty messages)
+ gr.update(interactive=False) # submit button
+ )
+
+ # Sort pending calls by created_at to get oldest first
+ sorted_calls = sorted(pending_calls, key=lambda x: x.get("created_at", ""))
+
+ # Create choices for dropdown
+ choices = [("latest", "latest")] # Add "latest" option first
+
+ for call in sorted_calls:
+ call_id = call["id"]
+ model = call.get("model", "unknown")
+ created_at = call.get("created_at", "")
+ # Format timestamp
+ try:
+ dt = datetime.fromisoformat(created_at.replace('Z', '+00:00'))
+ time_str = dt.strftime("%H:%M:%S")
+            except Exception:
+ time_str = created_at
+
+ choice_label = f"{call_id[:8]}... ({model}) - {time_str}"
+ choices.append((choice_label, call_id))
+
+        # Default to "latest", which shows the oldest pending conversation.
+        # pending_calls is non-empty here, so sorted_calls has at least one entry.
+        selected_call = sorted_calls[0]
+        conversation = self.format_messages_for_chatbot(selected_call.get("messages", []))
+        self.current_call_id = selected_call["id"]
+        # Get the last image from messages
+        self.last_image = self.get_last_image_from_messages(selected_call.get("messages", []))
+
+ return (
+ gr.update(choices=choices, value="latest"),
+ gr.update(value=self.last_image),
+ gr.update(value=conversation),
+ gr.update(interactive=bool(choices))
+ )
+
+ def on_call_selected(self, selected_choice):
+ """Handle when a call is selected from the dropdown."""
+ if not selected_choice:
+ return (
+ gr.update(value=None), # no image
+ gr.update(value=[]), # empty chatbot
+ gr.update(interactive=False)
+ )
+
+ pending_calls = self.get_pending_calls()
+ if not pending_calls:
+ return (
+ gr.update(value=None), # no image
+ gr.update(value=[]), # empty chatbot
+ gr.update(interactive=False)
+ )
+
+ # Handle "latest" option
+ if selected_choice == "latest":
+ # Sort calls by created_at to get oldest first
+ sorted_calls = sorted(pending_calls, key=lambda x: x.get("created_at", ""))
+ selected_call = sorted_calls[0] # Get the oldest call
+ call_id = selected_call["id"]
+ else:
+ # Extract call_id from the choice for specific calls
+ call_id = None
+ for call in pending_calls:
+ call_id_short = call["id"][:8]
+ if call_id_short in selected_choice:
+ call_id = call["id"]
+ break
+
+ if not call_id:
+ return (
+ gr.update(value=None), # no image
+ gr.update(value=[]), # empty chatbot
+ gr.update(interactive=False)
+ )
+
+ # Find the selected call
+ selected_call = next((c for c in pending_calls if c["id"] == call_id), None)
+
+ if not selected_call:
+ return (
+ gr.update(value=None), # no image
+ gr.update(value=[]), # empty chatbot
+ gr.update(interactive=False)
+ )
+
+ conversation = self.format_messages_for_chatbot(selected_call.get("messages", []))
+ self.current_call_id = call_id
+ # Get the last image from messages
+ self.last_image = self.get_last_image_from_messages(selected_call.get("messages", []))
+
+ return (
+ gr.update(value=self.last_image),
+ gr.update(value=conversation),
+ gr.update(interactive=True)
+ )
+
+ def submit_response(self, response_text: str):
+ """Submit a text response to the current call."""
+ if not self.current_call_id:
+ return (
+ gr.update(value=response_text), # keep response text
+ gr.update(value="❌ No call selected") # status
+ )
+
+ if not response_text.strip():
+ return (
+ gr.update(value=response_text), # keep response text
+ gr.update(value="❌ Response cannot be empty") # status
+ )
+
+ success = self.complete_call_with_response(self.current_call_id, response_text)
+
+ if success:
+ status_msg = "✅ Response submitted successfully!"
+ return (
+ gr.update(value=""), # clear response text
+ gr.update(value=status_msg) # status
+ )
+ else:
+ return (
+ gr.update(value=response_text), # keep response text
+ gr.update(value="❌ Failed to submit response") # status
+ )
+
+ def submit_action(self, action_type: str, **kwargs) -> str:
+ """Submit a computer action as a tool call."""
+ if not self.current_call_id:
+ return "❌ No call selected"
+
+ import uuid
+
+ # Create tool call structure
+ action_data = {"type": action_type, **kwargs}
+ tool_call = {
+ "id": f"call_{uuid.uuid4().hex[:24]}",
+ "type": "function",
+ "function": {
+ "name": "computer",
+ "arguments": json.dumps(action_data)
+ }
+ }
+
+ success = self.complete_call_with_tool_calls(self.current_call_id, [tool_call])
+
+ if success:
+ return f"✅ {action_type.capitalize()} action submitted as tool call"
+ else:
+ return f"❌ Failed to submit {action_type} action"
+
+ def submit_click_action(self, x: int, y: int, action_type: str = "click", button: str = "left") -> str:
+ """Submit a coordinate-based action."""
+ if action_type == "click":
+ return self.submit_action(action_type, x=x, y=y, button=button)
+ else:
+ return self.submit_action(action_type, x=x, y=y)
+
+ def submit_type_action(self, text: str) -> str:
+ """Submit a type action."""
+ return self.submit_action("type", text=text)
+
+ def submit_hotkey_action(self, keys: str) -> str:
+ """Submit a hotkey action."""
+ return self.submit_action("keypress", keys=keys)
+
+ def submit_description_click(self, description: str, action_type: str = "click", button: str = "left") -> str:
+ """Submit a description-based action."""
+ if action_type == "click":
+ return self.submit_action(action_type, element_description=description, button=button)
+ else:
+ return self.submit_action(action_type, element_description=description)
+
+ def wait_for_pending_calls(self, max_seconds: float = 10.0, check_interval: float = 0.2):
+ """Wait for pending calls to appear or until max_seconds elapsed.
+
+ This method loops and checks for pending calls at regular intervals,
+ returning as soon as a pending call is found or the maximum wait time is reached.
+
+ Args:
+ max_seconds: Maximum number of seconds to wait
+ check_interval: How often to check for pending calls (in seconds)
+ """
+
+ start_time = time.time()
+
+ while time.time() - start_time < max_seconds:
+ # Check if there are any pending calls
+ pending_calls = self.get_pending_calls()
+ if pending_calls:
+ # Found pending calls, return immediately
+ return self.refresh_pending_calls()
+
+ # Wait before checking again
+ time.sleep(check_interval)
+
+ # Max wait time reached, return current state
+ return self.refresh_pending_calls()
+
+
+def create_ui():
+ """Create the Gradio interface."""
+ ui_handler = HumanCompletionUI()
+
+ with gr.Blocks(title="Human-in-the-Loop Agent Tool") as demo:
+ gr.Markdown("# 🤖 Human-in-the-Loop Agent Tool")
+ gr.Markdown("Review AI conversation requests and provide human responses.")
+
+ with gr.Row():
+ with gr.Column(scale=2):
+ with gr.Group():
+ screenshot_image = gr.Image(
+ label="Screenshot",
+ interactive=False,
+ height=600
+ )
+
+ # Action type selection for image clicks
+ with gr.Row():
+ action_type_radio = gr.Radio(
+ label="Action Type",
+ choices=["click", "double_click", "move", "left_mouse_up", "left_mouse_down"],
+ value="click",
+ scale=2
+ )
+ action_button_radio = gr.Radio(
+ label="Button (for click only)",
+ choices=["left", "right", "wheel", "back", "forward"],
+ value="left",
+ visible=True,
+ scale=1
+ )
+
+ conversation_chatbot = gr.Chatbot(
+ label="Messages",
+ type="messages",
+ height=500,
+ show_copy_button=True
+ )
+
+ with gr.Column(scale=1):
+ with gr.Group():
+ call_dropdown = gr.Dropdown(
+ label="Select a pending call",
+ choices=["latest"],
+ interactive=True,
+ value="latest"
+ )
+ refresh_btn = gr.Button("🔄 Refresh", variant="secondary")
+
+ with gr.Group():
+ response_text = gr.Textbox(
+ label="Response",
+ lines=3,
+ placeholder="Enter your response here..."
+ )
+ submit_btn = gr.Button("📤 Submit Response", variant="primary", interactive=False)
+
+ # Action Accordions
+ with gr.Accordion("🖱️ Click Actions", open=False):
+ with gr.Group():
+ with gr.Row():
+ click_x = gr.Number(label="X", value=0, minimum=0)
+ click_y = gr.Number(label="Y", value=0, minimum=0)
+ with gr.Row():
+ click_action_type = gr.Dropdown(
+ label="Action Type",
+ choices=["click", "double_click", "move", "left_mouse_up", "left_mouse_down"],
+ value="click"
+ )
+ click_button = gr.Dropdown(
+ label="Button (for click only)",
+ choices=["left", "right", "wheel", "back", "forward"],
+ value="left"
+ )
+ click_submit_btn = gr.Button("Submit Action")
+
+ with gr.Accordion("📝 Type Action", open=False):
+ with gr.Group():
+ type_text = gr.Textbox(
+ label="Text to Type",
+ placeholder="Enter text to type..."
+ )
+ type_submit_btn = gr.Button("Submit Type")
+
+ with gr.Accordion("⌨️ Keypress Action", open=False):
+ with gr.Group():
+ keypress_text = gr.Textbox(
+ label="Keys",
+ placeholder="e.g., ctrl+c, alt+tab"
+ )
+ keypress_submit_btn = gr.Button("Submit Keypress")
+
+ with gr.Accordion("🎯 Description Action", open=False):
+ with gr.Group():
+ description_text = gr.Textbox(
+ label="Element Description",
+ placeholder="e.g., 'Privacy and security option in left sidebar'"
+ )
+ with gr.Row():
+ description_action_type = gr.Dropdown(
+ label="Action Type",
+ choices=["click", "double_click", "move", "left_mouse_up", "left_mouse_down"],
+ value="click"
+ )
+ description_button = gr.Radio(
+ label="Button (for click only)",
+ choices=["left", "right", "wheel", "back", "forward"],
+ value="left"
+ )
+ description_submit_btn = gr.Button("Submit Description Action")
+
+ status_display = gr.Textbox(
+ label="Status",
+ interactive=False,
+ value="Ready to receive calls..."
+ )
+
+ # Event handlers
+ refresh_btn.click(
+ fn=ui_handler.refresh_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+ call_dropdown.change(
+ fn=ui_handler.on_call_selected,
+ inputs=[call_dropdown],
+ outputs=[screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+        def handle_image_click(action_type, button, evt: gr.SelectData):
+            # Live radio values must come in via `inputs`; reading
+            # `component.value` inside a handler only yields the initial value.
+            if evt.index is not None:
+                x, y = evt.index
+                return ui_handler.submit_click_action(x, y, action_type or "click", button or "left")
+            return "No coordinates selected"
+
+        screenshot_image.select(
+            fn=handle_image_click,
+            inputs=[action_type_radio, action_button_radio],
+            outputs=[status_display]
+        ).then(
+            fn=ui_handler.wait_for_pending_calls,
+            outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+        )
+
+ # Response submission
+ submit_btn.click(
+ fn=ui_handler.submit_response,
+ inputs=[response_text],
+ outputs=[response_text, status_display]
+ ).then(
+ fn=ui_handler.refresh_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+ # Toggle button radio visibility based on action type
+ def toggle_button_visibility(action_type):
+ return gr.update(visible=(action_type == "click"))
+
+ action_type_radio.change(
+ fn=toggle_button_visibility,
+ inputs=[action_type_radio],
+ outputs=[action_button_radio]
+ )
+
+ # Action accordion handlers
+ click_submit_btn.click(
+ fn=ui_handler.submit_click_action,
+ inputs=[click_x, click_y, click_action_type, click_button],
+ outputs=[status_display]
+ ).then(
+ fn=ui_handler.wait_for_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+ type_submit_btn.click(
+ fn=ui_handler.submit_type_action,
+ inputs=[type_text],
+ outputs=[status_display]
+ ).then(
+ fn=ui_handler.wait_for_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+ keypress_submit_btn.click(
+ fn=ui_handler.submit_hotkey_action,
+ inputs=[keypress_text],
+ outputs=[status_display]
+ ).then(
+ fn=ui_handler.wait_for_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+        def handle_description_submit(description, action_type, button):
+            if description:
+                # The chained .then() handler refreshes pending calls,
+                # so no extra wait is needed here.
+                return ui_handler.submit_description_click(description, action_type, button)
+            return "Please enter a description"
+
+ description_submit_btn.click(
+ fn=handle_description_submit,
+ inputs=[description_text, description_action_type, description_button],
+ outputs=[status_display]
+ ).then(
+ fn=ui_handler.wait_for_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+ # Load initial data
+ demo.load(
+ fn=ui_handler.refresh_pending_calls,
+ outputs=[call_dropdown, screenshot_image, conversation_chatbot, submit_btn]
+ )
+
+ return demo
+
+
+if __name__ == "__main__":
+ demo = create_ui()
+ demo.queue()
+ demo.launch(server_name="0.0.0.0", server_port=7860)
diff --git a/libs/python/agent/agent/integrations/hud/__init__.py b/libs/python/agent/agent/integrations/hud/__init__.py
new file mode 100644
index 00000000..787613de
--- /dev/null
+++ b/libs/python/agent/agent/integrations/hud/__init__.py
@@ -0,0 +1,77 @@
+"""HUD integration for ComputerAgent."""
+
+import logging
+from typing import Any, Optional, Dict
+from hud import run_job as hud_run_job
+
+from .agent import ComputerAgent
+from .adapter import ComputerAgentAdapter
+from .computer_handler import HUDComputerHandler
+
+
+async def run_job(
+ model: str,
+ task_or_taskset: Any,
+ job_name: str,
+ # Job kwargs
+ auto_reply_question: bool = False,
+ adapter_cls: Any = None,
+ adapter_kwargs: Optional[Dict[str, Any]] = None,
+ max_steps_per_task: int = 20,
+ run_parallel: bool = True,
+ job_metadata: Optional[Dict[str, Any]] = None,
+ show_progress: bool = True,
+ max_concurrent_env_creations: Optional[int] = 30, # Limits gym.make calls
+ max_concurrent_agent_predictions: Optional[int] = None, # No limit on LLM calls
+ max_concurrent_tasks: Optional[int] = 30, # Limits overall task concurrency
+ **agent_kwargs: Any
+) -> Any:
+ """
+ Run a job using ComputerAgent with the specified model.
+
+ Args:
+ model: Model string for ComputerAgent (e.g., "anthropic/claude-3-5-sonnet-20241022")
+ task_or_taskset: Task or TaskSet to run
+ job_name: Name for the job
+ auto_reply_question: Whether to auto-reply to questions
+ adapter_cls: Custom adapter class (defaults to ComputerAgentAdapter)
+ adapter_kwargs: Additional kwargs for the adapter
+ max_steps_per_task: Maximum steps per task
+ run_parallel: Whether to run tasks in parallel
+ job_metadata: Additional metadata for the job
+ show_progress: Whether to show progress
+ max_concurrent_env_creations: Max concurrent environment creations
+ max_concurrent_agent_predictions: Max concurrent agent predictions
+ max_concurrent_tasks: Max concurrent tasks
+ **agent_kwargs: Additional kwargs to pass to ComputerAgent
+
+ Returns:
+ Job instance from HUD
+ """
+    # normalize verbose/verbosity kwargs: verbose=True is shorthand for
+    # verbosity=logging.INFO (lower logging levels are more verbose)
+    if agent_kwargs.pop("verbose", False):
+        agent_kwargs["verbosity"] = logging.INFO
+    verbose = agent_kwargs.get("verbosity", logging.WARNING) <= logging.INFO
+
+ # run job
+ return await hud_run_job(
+ agent_cls=ComputerAgent,
+ agent_kwargs={"model": model, **agent_kwargs},
+ task_or_taskset=task_or_taskset,
+ job_name=job_name,
+ auto_reply_question=auto_reply_question,
+ adapter_cls=adapter_cls,
+ adapter_kwargs=adapter_kwargs,
+ max_steps_per_task=max_steps_per_task,
+ run_parallel=run_parallel,
+ job_metadata=job_metadata,
+ show_progress=show_progress,
+ verbose=verbose,
+ max_concurrent_env_creations=max_concurrent_env_creations,
+ max_concurrent_agent_predictions=max_concurrent_agent_predictions,
+ max_concurrent_tasks=max_concurrent_tasks
+ )
+
+
+__all__ = ["ComputerAgent", "ComputerAgentAdapter", "HUDComputerHandler", "run_job"]
\ No newline at end of file
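The verbose/verbosity handling in `run_job` can be sketched in isolation (assuming Python's logging convention that numerically lower levels mean more output):

```python
import logging

def normalize_verbosity(agent_kwargs: dict) -> bool:
    """Treat verbose=True as shorthand for verbosity=logging.INFO, then
    report whether the effective level is INFO or more detailed.
    Mutates agent_kwargs the same way run_job does."""
    if agent_kwargs.pop("verbose", False):
        agent_kwargs["verbosity"] = logging.INFO
    return agent_kwargs.get("verbosity", logging.WARNING) <= logging.INFO
```

Because `DEBUG (10) < INFO (20) < WARNING (30)`, the comparison must be `<=`, not `>`: a caller asking for `verbosity=logging.DEBUG` wants the verbose HUD output.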
diff --git a/libs/python/agent/agent/integrations/hud/adapter.py b/libs/python/agent/agent/integrations/hud/adapter.py
new file mode 100644
index 00000000..77c8dc7d
--- /dev/null
+++ b/libs/python/agent/agent/integrations/hud/adapter.py
@@ -0,0 +1,121 @@
+"""HUD Adapter for ComputerAgent integration."""
+
+from __future__ import annotations
+
+from typing import Any, ClassVar
+
+from hud.adapters.common import CLA, Adapter
+from hud.adapters.common.types import (
+ CLAButton,
+ CLAKey,
+ ClickAction,
+ CustomAction,
+ DragAction,
+ MoveAction,
+ Point,
+ PressAction,
+ ResponseAction,
+ ScreenshotFetch,
+ ScrollAction,
+ TypeAction,
+ WaitAction,
+)
+
+
+class ComputerAgentAdapter(Adapter):
+ """Adapter for ComputerAgent to work with HUD."""
+
+ KEY_MAP: ClassVar[dict[str, CLAKey]] = {
+ "return": "enter",
+ "arrowup": "up",
+ "arrowdown": "down",
+ "arrowleft": "left",
+ "arrowright": "right",
+ "cmd": "ctrl",
+ "super": "win",
+ "meta": "win",
+ }
+
+ BUTTON_MAP: ClassVar[dict[str, CLAButton]] = {
+ "wheel": "middle",
+ "middle": "middle",
+ }
+
+ def __init__(self) -> None:
+ super().__init__()
+ # ComputerAgent default dimensions (can be overridden)
+ self.agent_width = 1024
+ self.agent_height = 768
+
+ def _map_key(self, key: str) -> CLAKey:
+ """Map a key to its standardized form."""
+ return self.KEY_MAP.get(key.lower(), key.lower()) # type: ignore
+
+ def convert(self, data: Any) -> CLA:
+ """Convert a ComputerAgent action to a HUD action."""
+ try:
+ action_type = data.get("type")
+
+ if action_type == "click":
+ x, y = data.get("x", 0), data.get("y", 0)
+ button = data.get("button", "left")
+ button = self.BUTTON_MAP.get(button, button)
+ if button is None:
+ button = "left"
+ converted_action = ClickAction(point=Point(x=x, y=y), button=button)
+
+ elif action_type == "double_click":
+ x, y = data.get("x", 0), data.get("y", 0)
+ converted_action = ClickAction(point=Point(x=x, y=y), button="left", pattern=[100])
+
+ elif action_type == "scroll":
+ x, y = int(data.get("x", 0)), int(data.get("y", 0))
+ scroll_x = int(data.get("scroll_x", 0))
+ scroll_y = int(data.get("scroll_y", 0))
+ converted_action = ScrollAction(
+ point=Point(x=x, y=y), scroll=Point(x=scroll_x, y=scroll_y)
+ )
+
+ elif action_type == "type":
+ text = data.get("text", "")
+ converted_action = TypeAction(text=text, enter_after=False)
+
+ elif action_type == "wait":
+ ms = data.get("ms", 1000)
+ converted_action = WaitAction(time=ms)
+
+ elif action_type == "move":
+ x, y = data.get("x", 0), data.get("y", 0)
+ converted_action = MoveAction(point=Point(x=x, y=y))
+
+ elif action_type == "keypress":
+ keys = data.get("keys", [])
+ if isinstance(keys, str):
+ keys = [keys]
+ converted_action = PressAction(keys=[self._map_key(k) for k in keys])
+
+ elif action_type == "drag":
+ path = data.get("path", [])
+ points = [Point(x=p.get("x", 0), y=p.get("y", 0)) for p in path]
+ converted_action = DragAction(path=points)
+
+ elif action_type == "screenshot":
+ converted_action = ScreenshotFetch()
+
+ elif action_type == "response":
+ converted_action = ResponseAction(text=data.get("text", ""))
+
+ elif action_type == "custom":
+ converted_action = CustomAction(action=data.get("action", ""))
+
+ else:
+ raise ValueError(f"Unsupported action type: {action_type}")
+
+ # Add reasoning and logs if available
+ converted_action.reasoning = data.get("reasoning", "")
+ converted_action.logs = data.get("logs", "")
+
+ return converted_action
+
+ except Exception as e:
+ raise ValueError(f"Invalid action: {data}. Error: {e!s}") from e
diff --git a/libs/python/agent/agent/integrations/hud/agent.py b/libs/python/agent/agent/integrations/hud/agent.py
new file mode 100644
index 00000000..abbf5f8c
--- /dev/null
+++ b/libs/python/agent/agent/integrations/hud/agent.py
@@ -0,0 +1,373 @@
+"""HUD ComputerAgent wrapper for OSWorld benchmarking."""
+
+import logging
+from typing import Any, Literal, Optional, Union, List, Dict
+import asyncio
+
+from agent import ComputerAgent as BaseComputerAgent
+from agent.responses import make_failed_tool_call_items
+from hud.adapters import Adapter
+from hud.agent.base import Agent
+from hud.utils.common import Observation
+from hud.types import Gym
+
+from .adapter import ComputerAgentAdapter
+from .computer_handler import HUDComputerHandler
+
+logger = logging.getLogger(__name__)
+
+BASE_SYSTEM_PROMPT = """
+You are an autonomous computer-using agent. Follow these guidelines:
+
+1. Be decisive and complete tasks without asking for confirmation unless absolutely necessary.
+2. Use the computer tools to complete the task and do not stop until the task is complete.
+3. Do NOT ask questions like "Should I proceed?" or "Would you like me to continue?" - just proceed with the task.
+4. When you find what you're looking for (e.g., a file to upload), proceed with the action directly.
+5. Only stop when the task is fully complete or if you encounter an error that prevents completion.
+6. Trust that the user wants you to complete the entire task they've requested.
+7. You must say "Task completed" when the task is complete.
+
+Remember: You have been given permission to complete the requested task autonomously.
+""".strip()
+
+class ComputerAgent(Agent[BaseComputerAgent, dict[str, Any]]):
+ """
+ A ComputerAgent wrapper for HUD integration.
+
+ This agent wraps the base ComputerAgent to work with HUD environments,
+ providing the same interface as OperatorAgent but using ComputerAgent internally.
+ """
+
+ transfer_gyms: dict[Gym, Gym] = {"qa": "hud-browser"}
+
+ def __init__(
+ self,
+ model: str = "anthropic/claude-3-5-sonnet-20241022",
+ environment: Literal["windows", "mac", "linux", "browser"] = "linux",
+ adapter: Optional[Adapter] = None,
+ name: Optional[str] = None,
+ **kwargs: Any,
+ ):
+ """
+ Initialize the ComputerAgent for HUD.
+
+ Args:
+ model: The model string for ComputerAgent (e.g., "anthropic/claude-3-5-sonnet-20241022")
+ environment: The environment type (windows, mac, linux, browser)
+ adapter: The adapter to use for preprocessing and postprocessing
+ name: The name of the agent
+ **kwargs: Additional arguments passed to ComputerAgent
+ """
+ # Create adapter if not provided
+ adapter = adapter or ComputerAgentAdapter()
+
+ if name is None:
+ name = f"computeragent-{model.split('/')[-1]}"
+
+ # Initialize the base Agent class without client (we'll create it later)
+ super().__init__(client=None, adapter=adapter, name=name)
+
+ self.model = model
+ self.environment = environment
+ self.kwargs = kwargs
+
+ # Default dimensions
+ self.width = 1024
+ self.height = 768
+
+ # Update dimensions if adapter is provided
+ if self.adapter:
+ self.width = self.adapter.agent_width
+ self.height = self.adapter.agent_height
+
+ # Create HUD computer handler
+ self.hud_computer = HUDComputerHandler(
+ environment=environment,
+ dimensions=(self.width, self.height)
+ )
+
+ # Handle trajectory_dir by adding TrajectorySaverCallback
+ trajectory_dir = kwargs.pop("trajectory_dir", None)
+ callbacks = kwargs.get("callbacks", [])
+
+ if trajectory_dir:
+ from agent.callbacks.trajectory_saver import TrajectorySaverCallback
+ trajectory_callback = TrajectorySaverCallback(trajectory_dir, reset_on_run=False)
+ callbacks = callbacks + [trajectory_callback]
+ kwargs["callbacks"] = callbacks
+
+ # Initialize ComputerAgent with HUD computer handler
+ self.computer_agent = BaseComputerAgent(
+ model=model,
+ tools=[self.hud_computer],
+ **kwargs
+ )
+
+ # Set the client to the computer_agent for compatibility
+ self.client = self.computer_agent
+
+ # State tracking
+ self.conversation_history: List[Dict[str, Any]] = []
+ self.initial_prompt: Optional[str] = None
+
+ # System prompt for computer use tasks
+ self.base_system_prompt = BASE_SYSTEM_PROMPT
+
+ async def fetch_response(self, observation: Observation) -> tuple[list[dict[str, Any]], bool]:
+ """
+ Fetch a response from ComputerAgent based on the observation.
+
+ Args:
+ observation: The preprocessed observation, attributes:
+ screenshot: Base64 encoded PNG string of the screen
+ text: Text observation, if available
+
+ Returns:
+            tuple[list[dict[str, Any]], bool]: A tuple containing the list of raw
+            actions and a boolean indicating whether the agent believes the task is complete.
+ """
+ try:
+ # Update the computer handler with the current screenshot
+ if observation.screenshot:
+ self.hud_computer.update_screenshot(observation.screenshot)
+
+ # Set up action callback to capture actions
+            captured_actions = []
+
+            async def action_callback(action: Dict[str, Any]) -> None:
+                """Callback to capture actions from ComputerAgent."""
+                # list.append mutates in place, so no nonlocal is needed
+                captured_actions.append(action)
+
+ # Set the action callback
+ self.hud_computer.set_action_callback(action_callback)
+
+ # Prepare the message for ComputerAgent
+ if not self.conversation_history:
+ # First interaction - use the observation text as initial prompt
+ if observation.text:
+ self.initial_prompt = observation.text
+ message = f"{self.base_system_prompt}\n\nTask: {observation.text}"
+ else:
+ message = f"{self.base_system_prompt}\n\nPlease analyze the current screen and determine what action to take."
+
+ input_content = [
+ {"type": "input_text", "text": message}
+ ]
+
+ # Add screenshot if present
+ if observation.screenshot:
+ input_content.append(
+ {
+ "type": "input_image",
+ "image_url": f"data:image/png;base64,{observation.screenshot}",
+ }
+ )
+
+ self.conversation_history.append({"role": "user", "content": input_content})
+ else:
+ # Subsequent interactions - check if last action was computer_call
+ # If so, add computer_call_output with screenshot instead of user message
+ last_computer_calls = []
+ for msg in reversed(self.conversation_history):
+ if msg.get("type") == "computer_call":
+ call_id = msg.get("call_id")
+ if call_id:
+ # Check if this call_id already has a computer_call_output
+ has_output = any(
+ m.get("type") == "computer_call_output" and m.get("call_id") == call_id
+ for m in self.conversation_history
+ )
+ if not has_output:
+ last_computer_calls.append(call_id)
+
+                if last_computer_calls:
+                    if observation.screenshot:
+                        screenshot_b64 = observation.screenshot
+                    else:
+                        logger.info("No screenshot in observation, taking one")
+                        screenshot_b64 = await self.hud_computer.screenshot()
+ # Add computer_call_output for each unresponded computer_call
+ for call_id in reversed(last_computer_calls): # Maintain order
+ self.conversation_history.append({
+ "type": "computer_call_output",
+ "call_id": call_id,
+ "output": {
+ "type": "input_image",
+ "image_url": f"data:image/png;base64,{screenshot_b64}"
+ }
+ })
+ else:
+ # No computer_call found, add regular user message
+ message = "Continue with the task based on the current screen state."
+ input_content = [
+ {"type": "input_text", "text": message}
+ ]
+
+ # Add screenshot if present
+ if observation.screenshot:
+ input_content.append(
+ {
+ "type": "input_image",
+ "image_url": f"data:image/png;base64,{observation.screenshot}",
+ }
+ )
+
+ self.conversation_history.append({"role": "user", "content": input_content})
+
+ # If the last message is a reasoning message, change it to output_text
+ if (self.conversation_history and
+ self.conversation_history[-1].get("type") == "reasoning" and
+ self.conversation_history[-1].get("summary")):
+
+ reasoning_msg = self.conversation_history[-1]
+ summary_texts = []
+
+ # Extract all summary_text entries
+ for summary_item in reasoning_msg["summary"]:
+ if summary_item.get("type") == "summary_text":
+ summary_texts.append(summary_item.get("text", ""))
+
+ # Convert to message format with output_text
+ if summary_texts:
+ converted_message = {
+ "type": "message",
+ "role": "assistant",
+ "content": [
+ {
+ "text": " ".join(summary_texts),
+ "type": "output_text"
+ }
+ ]
+ }
+
+ # Replace the reasoning message with the converted message
+ self.conversation_history[-1] = converted_message
+
+ # Run ComputerAgent
+ try:
+ new_items = []
+
+ # ComputerAgent.run returns an async generator
+ try:
+                    async for result in self.computer_agent.run(self.conversation_history, stream=False):
+                        output = result.get("output", [])
+                        # if the result ends with a computer_call_output, immediately exit
+                        if output and output[-1].get("type") == "computer_call_output":
+                            break
+                        # otherwise add agent output to the conversation history
+                        new_items += output
+ except Exception as e:
+ # if the last message is reasoning, change it to output_text
+ if new_items and new_items[-1].get("type") == "reasoning":
+ new_items[-1] = {
+ "type": "message",
+ "role": "assistant",
+ "content": [
+ {
+ "text": new_items[-1].get("summary", [{}])[0].get("text", ""),
+ "type": "output_text"
+ }
+ ]
+ }
+ # Check if there are any computer_call items in new_items
+ computer_calls = [item for item in new_items if item.get("type") == "computer_call"]
+ if computer_calls:
+ # Remove computer_call items from new_items
+ new_items = [item for item in new_items if item.get("type") != "computer_call"]
+
+ # Add failed tool call items for each computer call
+ for computer_call in computer_calls:
+ tool_input = computer_call.get("action", {})
+ call_id = computer_call.get("call_id")
+ new_items.extend(make_failed_tool_call_items(
+ tool_name="computer",
+ tool_kwargs=tool_input,
+ error_message=repr(e),
+ call_id=call_id
+ ))
+ else:
+ # add error message to conversation history (fallback for non-computer-call errors)
+ new_items.append({
+ "type": "user",
+ "content": [
+ {
+ "type": "input_text",
+ "text": f"Error during previous attempted action: {repr(e)}"
+ }
+ ]
+ })
+
+ # Check if we captured any actions
+ if captured_actions:
+ # Extract reasoning from the conversation history
+ reasoning = ""
+ # Look for the latest reasoning message
+ for msg in reversed(new_items):
+ if msg.get("type") == "reasoning" and msg.get("summary"):
+ reasoning = " ".join([s.get("text", "") for s in msg["summary"] if s.get("type") == "summary_text"])
+ break
+ elif msg.get("type") == "message" and msg.get("role") == "assistant":
+ content = msg.get("content", [])
+ if isinstance(content, list):
+ reasoning = " ".join([c.get("text", "") for c in content if c.get("type") == "output_text"])
+ break
+
+ # update conversation history
+ self.conversation_history += new_items
+
+ # Add reasoning and logs to each action
+ for action in captured_actions:
+ action["reasoning"] = reasoning
+ action["logs"] = {"conversation_length": len(self.conversation_history)}
+
+ return captured_actions, False
+
+ # Check if the last message is "Task completed"
+ response_text = ""
+ for msg in reversed(new_items):
+ if msg.get("type") == "message" and msg.get("role") == "assistant":
+ content = msg.get("content", [])
+ for c in content:
+ if c.get("type") == "output_text":
+ response_text = c.get("text", response_text)
+ break
+ break
+
+ done = "task completed" in response_text.lower()
+
+ # update conversation history
+ self.conversation_history += new_items
+
+ response_action = {
+ "type": "response",
+ "text": response_text,
+ "reasoning": response_text,
+ "logs": {"conversation_length": len(self.conversation_history)}
+ }
+
+ # Check if this indicates task completion or failure
+ if "task is infeasible" in response_text.lower():
+ response_action = {"type": "custom", "action": "FAIL"}
+ done = True
+
+ return [response_action], done
+ except Exception as e:
+ logger.error(f"Error running ComputerAgent: {e}")
+ # Return an error response
+ error_action = {
+ "type": "response",
+ "text": f"Error occurred: {str(e)}",
+ "reasoning": f"ComputerAgent encountered an error: {str(e)}",
+ "logs": {"error": str(e)}
+ }
+ return [error_action], True
+
+ except Exception as e:
+ logger.error(f"Error in fetch_response: {e}")
+ error_action = {
+ "type": "response",
+ "text": f"Error in agent processing: {str(e)}",
+ "reasoning": f"Agent processing error: {str(e)}",
+ "logs": {"error": str(e)}
+ }
+ return [error_action], True
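The bookkeeping in `fetch_response` that pairs each `computer_call` with a later `computer_call_output` can be isolated into a small helper (a hypothetical refactor mirroring the loop above, not part of the SDK):

```python
def unanswered_call_ids(history: list) -> list:
    """Return call_ids of computer_call items in `history` that have no
    matching computer_call_output anywhere in the conversation."""
    answered = {
        m.get("call_id")
        for m in history
        if m.get("type") == "computer_call_output"
    }
    return [
        m["call_id"]
        for m in history
        if m.get("type") == "computer_call" and m.get("call_id") not in answered
    ]
```

Each id this returns still needs a `computer_call_output` item carrying the current screenshot before the next model turn.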
diff --git a/libs/python/agent/agent/integrations/hud/computer_handler.py b/libs/python/agent/agent/integrations/hud/computer_handler.py
new file mode 100644
index 00000000..9fcc8245
--- /dev/null
+++ b/libs/python/agent/agent/integrations/hud/computer_handler.py
@@ -0,0 +1,187 @@
+"""HUD Computer Handler for ComputerAgent integration."""
+
+import base64
+from io import BytesIO
+from typing import Literal, Optional, Any, Dict, Callable
+from PIL import Image
+
+from agent.computers import AsyncComputerHandler
+
+
+class HUDComputerHandler(AsyncComputerHandler):
+ """Computer handler that interfaces with HUD environment."""
+
+ def __init__(
+ self,
+ environment: Literal["windows", "mac", "linux", "browser"] = "linux",
+ dimensions: tuple[int, int] = (1024, 768),
+ screenshot_callback: Optional[Callable] = None,
+ action_callback: Optional[Callable] = None,
+ ):
+ """
+ Initialize HUD computer handler.
+
+ Args:
+ environment: The environment type for HUD
+ dimensions: Screen dimensions as (width, height)
+ screenshot_callback: Optional callback to get screenshots from HUD environment
+ action_callback: Optional callback to execute actions in HUD environment
+ """
+ super().__init__()
+ self._environment = environment
+ self._dimensions = dimensions
+ self._screenshot_callback = screenshot_callback
+ self._action_callback = action_callback
+
+ # Store the last screenshot for reuse
+ self._last_screenshot: Optional[str] = None
+
+ def set_screenshot_callback(self, callback: Callable) -> None:
+ """Set the screenshot callback."""
+ self._screenshot_callback = callback
+
+ def set_action_callback(self, callback: Callable) -> None:
+ """Set the action callback."""
+ self._action_callback = callback
+
+ def update_screenshot(self, screenshot: str) -> None:
+ """Update the stored screenshot (base64 string)."""
+ self._last_screenshot = screenshot
+
+ async def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]:
+ """Get the current environment type."""
+ return self._environment # type: ignore
+
+ async def get_dimensions(self) -> tuple[int, int]:
+ """Get screen dimensions as (width, height)."""
+ return self._dimensions
+
+ async def screenshot(self) -> str:
+ """Take a screenshot and return as base64 string."""
+ if self._screenshot_callback:
+ screenshot = await self._screenshot_callback()
+ if isinstance(screenshot, str):
+ self._last_screenshot = screenshot
+ return screenshot
+ elif isinstance(screenshot, Image.Image):
+ # Convert PIL Image to base64
+ buffer = BytesIO()
+ screenshot.save(buffer, format="PNG")
+ screenshot_b64 = base64.b64encode(buffer.getvalue()).decode()
+ self._last_screenshot = screenshot_b64
+ return screenshot_b64
+ elif isinstance(screenshot, bytes):
+ screenshot_b64 = base64.b64encode(screenshot).decode()
+ self._last_screenshot = screenshot_b64
+ return screenshot_b64
+
+ # Return last screenshot if available, otherwise create a blank one
+ if self._last_screenshot:
+ return self._last_screenshot
+
+ # Create a blank screenshot as fallback
+ blank_image = Image.new('RGB', self._dimensions, color='white')
+ buffer = BytesIO()
+ blank_image.save(buffer, format="PNG")
+ screenshot_b64 = base64.b64encode(buffer.getvalue()).decode()
+ self._last_screenshot = screenshot_b64
+ return screenshot_b64
+
+ async def click(self, x: int, y: int, button: str = "left") -> None:
+ """Click at coordinates with specified button."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "click",
+ "x": x,
+ "y": y,
+ "button": button
+ })
+
+ async def double_click(self, x: int, y: int) -> None:
+ """Double click at coordinates."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "double_click",
+ "x": x,
+ "y": y
+ })
+
+ async def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
+ """Scroll at coordinates with specified scroll amounts."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "scroll",
+ "x": x,
+ "y": y,
+ "scroll_x": scroll_x,
+ "scroll_y": scroll_y
+ })
+
+ async def type(self, text: str) -> None:
+ """Type text."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "type",
+ "text": text
+ })
+
+ async def wait(self, ms: int = 1000) -> None:
+ """Wait for specified milliseconds."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "wait",
+ "ms": ms
+ })
+
+ async def move(self, x: int, y: int) -> None:
+ """Move cursor to coordinates."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "move",
+ "x": x,
+ "y": y
+ })
+
+ async def keypress(self, keys: list[str] | str) -> None:
+ """Press key combination."""
+ if isinstance(keys, str):
+ keys = [keys]
+ if self._action_callback:
+ await self._action_callback({
+ "type": "keypress",
+ "keys": keys
+ })
+
+ async def drag(self, path: list[dict[str, int]]) -> None:
+ """Drag along a path of points."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "drag",
+ "path": path
+ })
+
+ async def left_mouse_down(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse down at coordinates."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "left_mouse_down",
+ "x": x,
+ "y": y
+ })
+
+ async def left_mouse_up(self, x: Optional[int] = None, y: Optional[int] = None) -> None:
+ """Left mouse up at coordinates."""
+ if self._action_callback:
+ await self._action_callback({
+ "type": "left_mouse_up",
+ "x": x,
+ "y": y
+ })
+
+ async def get_current_url(self) -> str:
+ """Get the current URL."""
+ if self._action_callback:
+ return await self._action_callback({
+ "type": "get_current_url"
+ })
+ return ""
\ No newline at end of file
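`HUDComputerHandler.screenshot` normalizes whatever the callback returns (an already-encoded base64 string, a PIL image, or raw PNG bytes) into a base64 string. The str/bytes branches reduce to the following sketch (the `PIL.Image` branch is omitted here to keep the example stdlib-only):

```python
import base64

def to_base64(screenshot) -> str:
    """Normalize a screenshot to a base64-encoded string.
    Accepts an already-encoded str or raw image bytes; the PIL.Image
    branch from HUDComputerHandler is intentionally omitted."""
    if isinstance(screenshot, str):
        return screenshot
    if isinstance(screenshot, (bytes, bytearray)):
        return base64.b64encode(screenshot).decode()
    raise TypeError(f"unsupported screenshot type: {type(screenshot)!r}")
```

For a PIL image, the handler saves it to an in-memory PNG buffer first and then base64-encodes the buffer contents, as shown in the `screenshot()` method above.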
diff --git a/libs/python/agent/agent/loops/__init__.py b/libs/python/agent/agent/loops/__init__.py
index aa159411..45f70e20 100644
--- a/libs/python/agent/agent/loops/__init__.py
+++ b/libs/python/agent/agent/loops/__init__.py
@@ -7,5 +7,8 @@ from . import anthropic
from . import openai
from . import uitars
from . import omniparser
+from . import gta1
+from . import composed_grounded
+from . import glm45v
-__all__ = ["anthropic", "openai", "uitars", "omniparser"]
+__all__ = ["anthropic", "openai", "uitars", "omniparser", "gta1", "composed_grounded", "glm45v"]
diff --git a/libs/python/agent/agent/loops/anthropic.py b/libs/python/agent/agent/loops/anthropic.py
index 02ac1c29..50fbd24e 100644
--- a/libs/python/agent/agent/loops/anthropic.py
+++ b/libs/python/agent/agent/loops/anthropic.py
@@ -4,12 +4,13 @@ Anthropic hosted tools agent loop implementation using liteLLM
import asyncio
import json
-from typing import Dict, List, Any, AsyncGenerator, Union, Optional
+from typing import Dict, List, Any, AsyncGenerator, Union, Optional, Tuple
import litellm
from litellm.responses.litellm_completion_transformation.transformation import LiteLLMCompletionResponsesConfig
-from ..decorators import agent_loop
-from ..types import Messages, AgentResponse, Tools
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
+from ..loops.base import AsyncAgentConfig
from ..responses import (
make_reasoning_item,
make_output_text_item,
@@ -22,7 +23,10 @@ from ..responses import (
make_type_item,
make_wait_item,
make_input_image_item,
- make_screenshot_item
+ make_screenshot_item,
+ make_failed_tool_call_items,
+ make_left_mouse_down_item,
+ make_left_mouse_up_item
)
# Model version mapping to tool version and beta flag
@@ -64,21 +68,28 @@ def _get_tool_config_for_model(model: str) -> Dict[str, str]:
"beta_flag": "computer-use-2024-10-22"
}
-def _map_computer_tool_to_anthropic(computer_tool: Any, tool_version: str) -> Dict[str, Any]:
+async def _map_computer_tool_to_anthropic(computer_tool: Any, tool_version: str) -> Dict[str, Any]:
"""Map a computer tool to Anthropic's hosted tool schema."""
+ # Get dimensions from the computer handler
+ try:
+ width, height = await computer_tool.get_dimensions()
+ except Exception:
+ # Fallback to default dimensions if method fails
+ width, height = 1024, 768
+
return {
"type": tool_version,
"function": {
"name": "computer",
"parameters": {
- "display_height_px": getattr(computer_tool, 'display_height', 768),
- "display_width_px": getattr(computer_tool, 'display_width', 1024),
- "display_number": getattr(computer_tool, 'display_number', 1),
+ "display_height_px": height,
+ "display_width_px": width,
+ "display_number": 1,
},
},
}
-def _prepare_tools_for_anthropic(tool_schemas: List[Dict[str, Any]], model: str) -> Tools:
+async def _prepare_tools_for_anthropic(tool_schemas: List[Dict[str, Any]], model: str) -> Tools:
"""Prepare tools for Anthropic API format."""
tool_config = _get_tool_config_for_model(model)
anthropic_tools = []
@@ -86,7 +97,7 @@ def _prepare_tools_for_anthropic(tool_schemas: List[Dict[str, Any]], model: str)
for schema in tool_schemas:
if schema["type"] == "computer":
# Map computer tool to Anthropic format
- anthropic_tools.append(_map_computer_tool_to_anthropic(
+ anthropic_tools.append(await _map_computer_tool_to_anthropic(
schema["computer"],
tool_config["tool_version"]
))
@@ -107,7 +118,8 @@ def _prepare_tools_for_anthropic(tool_schemas: List[Dict[str, Any]], model: str)
def _convert_responses_items_to_completion_messages(messages: Messages) -> List[Dict[str, Any]]:
"""Convert responses_items message format to liteLLM completion format."""
completion_messages = []
-
+ call_id_to_fn_name = {}
+
for message in messages:
msg_type = message.get("type")
role = message.get("role")
@@ -185,6 +197,43 @@ def _convert_responses_items_to_completion_messages(messages: Messages) -> List[
"content": reasoning_text
})
+ elif msg_type == "function_call":
+ fn_name = message.get("name")
+ fn_args = message.get("arguments", "{}")
+ call_id = message.get("call_id", "call_1")
+ call_id_to_fn_name[call_id] = fn_name
+ openai_tool_calls = [{
+ "id": call_id,
+ "type": "function",
+ "function": {
+ "name": fn_name,
+ "arguments": fn_args
+ }
+        }]
+        # If the last completion message is an assistant message, extend its tool_calls
+ if completion_messages and completion_messages[-1].get("role") == "assistant":
+ if "tool_calls" not in completion_messages[-1]:
+ completion_messages[-1]["tool_calls"] = []
+ completion_messages[-1]["tool_calls"].extend(openai_tool_calls)
+ else:
+ # Create new assistant message with tool calls
+ completion_messages.append({
+ "role": "assistant",
+ "content": None,
+ "tool_calls": openai_tool_calls
+ })
+
+ elif msg_type == "function_call_output":
+ call_id = message.get("call_id", "call_1")
+ fn_output = message.get("output", "")
+ fn_name = call_id_to_fn_name.get(call_id, "computer")
+
+ completion_messages.append({
+ "role": "function",
+ "name": fn_name,
+ "tool_call_id": call_id,
+ "content": str(fn_output)
+ })
+
elif msg_type == "computer_call":
# Computer call becomes tool use in assistant message
action = message.get("action", {})
@@ -519,6 +568,26 @@ def _convert_responses_items_to_completion_messages(messages: Messages) -> List[
"action": "screenshot"
}
})
+ elif action_type == "left_mouse_down":
+ tool_use_content.append({
+ "type": "tool_use",
+ "id": call_id,
+ "name": "computer",
+ "input": {
+ "action": "left_mouse_down",
+ "coordinate": [action.get("x", None), action.get("y", None)]
+ }
+ })
+ elif action_type == "left_mouse_up":
+ tool_use_content.append({
+ "type": "tool_use",
+ "id": call_id,
+ "name": "computer",
+ "input": {
+ "action": "left_mouse_up",
+ "coordinate": [action.get("x", None), action.get("y", None)]
+ }
+ })
# Convert tool_use_content to OpenAI tool_calls format
openai_tool_calls = []
@@ -603,45 +672,350 @@ def _convert_completion_to_responses_items(response: Any) -> List[Dict[str, Any]
# Action reference:
# https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/computer-use-tool#available-actions
+ try:
+ # Basic actions (all versions)
+ if action_type == "screenshot":
+ responses_items.append(make_screenshot_item(call_id=call_id))
+ elif action_type in ["click", "left_click"]:
+ coordinate = tool_input.get("coordinate", [0, 0])
+ responses_items.append(make_click_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ call_id=call_id
+ ))
+ elif action_type in ["type", "type_text"]:
+ responses_items.append(make_type_item(
+ text=tool_input.get("text", ""),
+ call_id=call_id
+ ))
+ elif action_type in ["key", "keypress", "hotkey"]:
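+                # e.g. {"action": "key", "text": "ctrl+shift+t"} is normalized to
+                # keys=["ctrl", "shift", "t"] before being emitted as a keypress item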
+ responses_items.append(make_keypress_item(
+ keys=tool_input.get("text", "").replace("+", "-").split("-"),
+ call_id=call_id
+ ))
+ elif action_type in ["mouse_move", "move_cursor", "move"]:
+ # Mouse move - create a custom action item
+ coordinate = tool_input.get("coordinate", [0, 0])
+ responses_items.append(
+ make_move_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ call_id=call_id
+ )
+ )
+
+ # Enhanced actions (computer_20250124) Available in Claude 4 and Claude Sonnet 3.7
+ elif action_type == "scroll":
+ coordinate = tool_input.get("coordinate", [0, 0])
+ scroll_amount = tool_input.get("scroll_amount", 3)
+                direction = tool_input.get("scroll_direction", "down")
+                scroll_x = scroll_amount if direction == "right" else \
+                    -scroll_amount if direction == "left" else 0
+                scroll_y = scroll_amount if direction == "down" else \
+                    -scroll_amount if direction == "up" else 0
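+                # e.g. scroll_direction="down", scroll_amount=5 -> scroll_x=0, scroll_y=5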
+ responses_items.append(make_scroll_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ scroll_x=scroll_x,
+ scroll_y=scroll_y,
+ call_id=call_id
+ ))
+ elif action_type in ["left_click_drag", "drag"]:
+ start_coord = tool_input.get("start_coordinate", [0, 0])
+ end_coord = tool_input.get("end_coordinate", [0, 0])
+ responses_items.append(make_drag_item(
+ path=[
+ {
+ "x": start_coord[0] if len(start_coord) > 0 else 0,
+ "y": start_coord[1] if len(start_coord) > 1 else 0
+ },
+ {
+ "x": end_coord[0] if len(end_coord) > 0 else 0,
+ "y": end_coord[1] if len(end_coord) > 1 else 0
+ }
+ ],
+ call_id=call_id
+ ))
+ elif action_type == "right_click":
+ coordinate = tool_input.get("coordinate", [0, 0])
+ responses_items.append(make_click_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ button="right",
+ call_id=call_id
+ ))
+ elif action_type == "middle_click":
+ coordinate = tool_input.get("coordinate", [0, 0])
+ responses_items.append(make_click_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ button="wheel",
+ call_id=call_id
+ ))
+ elif action_type == "double_click":
+ coordinate = tool_input.get("coordinate", [0, 0])
+ responses_items.append(make_double_click_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ call_id=call_id
+ ))
+ elif action_type == "triple_click":
+                # Not representable in the responses_items schema yet; the
+                # surrounding except handler records this as a failed tool call.
+ raise NotImplementedError("triple_click")
+ elif action_type == "left_mouse_down":
+ coordinate = tool_input.get("coordinate", [None, None])
+ responses_items.append(make_left_mouse_down_item(
+ x=coordinate[0] if len(coordinate) > 0 else None,
+ y=coordinate[1] if len(coordinate) > 1 else None,
+ call_id=call_id
+ ))
+ elif action_type == "left_mouse_up":
+ coordinate = tool_input.get("coordinate", [None, None])
+ responses_items.append(make_left_mouse_up_item(
+ x=coordinate[0] if len(coordinate) > 0 else None,
+ y=coordinate[1] if len(coordinate) > 1 else None,
+ call_id=call_id
+ ))
+ elif action_type == "hold_key":
+                # Not representable in the responses_items schema yet; the
+                # surrounding except handler records this as a failed tool call.
+ raise NotImplementedError("hold_key")
+ elif action_type == "wait":
+ responses_items.append(make_wait_item(
+ call_id=call_id
+ ))
+ else:
+ raise ValueError(f"Unknown action type: {action_type}")
+ except Exception as e:
+ responses_items.extend(make_failed_tool_call_items(
+ tool_name="computer",
+ tool_kwargs=tool_input,
+ error_message=repr(e),
+ call_id=call_id
+ ))
+
+ # Handle tool calls (alternative format)
+ if hasattr(message, 'tool_calls') and message.tool_calls:
+ for tool_call in message.tool_calls:
+ if tool_call.function.name == "computer":
+                    try:
+ args = json.loads(tool_call.function.arguments)
+ action_type = args.get("action")
+ call_id = tool_call.id
+
# Basic actions (all versions)
if action_type == "screenshot":
- responses_items.append(make_screenshot_item(call_id=call_id))
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "screenshot"
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "screenshot"
+ # }
+ # }
+ responses_items.append(make_screenshot_item(
+ call_id=call_id
+ ))
elif action_type in ["click", "left_click"]:
- coordinate = tool_input.get("coordinate", [0, 0])
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "click",
+ # "coordinate": [100, 200]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "click",
+ # "x": 100,
+ # "y": 200
+ # }
+ # }
+ coordinate = args.get("coordinate", [0, 0])
responses_items.append(make_click_item(
x=coordinate[0] if len(coordinate) > 0 else 0,
y=coordinate[1] if len(coordinate) > 1 else 0,
call_id=call_id
))
elif action_type in ["type", "type_text"]:
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "type",
+ # "text": "Hello World"
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "type",
+ # "text": "Hello World"
+ # }
+ # }
responses_items.append(make_type_item(
- text=tool_input.get("text", ""),
+ text=args.get("text", ""),
call_id=call_id
))
elif action_type in ["key", "keypress", "hotkey"]:
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "key",
+ # "text": "ctrl+c"
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "keypress",
+ # "keys": ["ctrl", "c"]
+ # }
+ # }
responses_items.append(make_keypress_item(
- keys=tool_input.get("text", "").replace("+", "-").split("-"),
+ keys=args.get("text", "").replace("+", "-").split("-"),
call_id=call_id
))
elif action_type in ["mouse_move", "move_cursor", "move"]:
- # Mouse move - create a custom action item
- coordinate = tool_input.get("coordinate", [0, 0])
- responses_items.append(
- make_move_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- call_id=call_id
- )
- )
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "mouse_move",
+ # "coordinate": [150, 250]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "mouse_move",
+ # "x": 150,
+ # "y": 250
+ # }
+ # }
+ coordinate = args.get("coordinate", [0, 0])
+ responses_items.append(make_move_item(
+ x=coordinate[0] if len(coordinate) > 0 else 0,
+ y=coordinate[1] if len(coordinate) > 1 else 0,
+ call_id=call_id
+ ))
# Enhanced actions (computer_20250124) Available in Claude 4 and Claude Sonnet 3.7
elif action_type == "scroll":
- coordinate = tool_input.get("coordinate", [0, 0])
- scroll_amount = tool_input.get("scroll_amount", 3)
- scroll_x = scroll_amount if tool_input.get("scroll_direction", "down") == "right" else \
- -scroll_amount if tool_input.get("scroll_direction", "down") == "left" else 0
- scroll_y = scroll_amount if tool_input.get("scroll_direction", "down") == "down" else \
- -scroll_amount if tool_input.get("scroll_direction", "down") == "up" else 0
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "scroll",
+ # "coordinate": [300, 400],
+ # "scroll_direction": "down",
+ # "scroll_amount": 5
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "scroll",
+ # "x": 300,
+ # "y": 400,
+ # "scroll_x": 0,
+                            #         "scroll_y": 5
+                            #     }
+                            # }
+                            coordinate = args.get("coordinate", [0, 0])
+                            direction = args.get("scroll_direction", "down")
+                            amount = args.get("scroll_amount", 3)
+                            # Match the sign convention of the tool_use path above:
+                            # positive scroll_x scrolls right, positive scroll_y scrolls down
+                            scroll_x = amount if direction == "right" else \
+                                -amount if direction == "left" else 0
+                            scroll_y = amount if direction == "down" else \
+                                -amount if direction == "up" else 0
responses_items.append(make_scroll_item(
x=coordinate[0] if len(coordinate) > 0 else 0,
y=coordinate[1] if len(coordinate) > 1 else 0,
@@ -650,8 +1024,34 @@ def _convert_completion_to_responses_items(response: Any) -> List[Dict[str, Any]
call_id=call_id
))
elif action_type in ["left_click_drag", "drag"]:
- start_coord = tool_input.get("start_coordinate", [0, 0])
- end_coord = tool_input.get("end_coordinate", [0, 0])
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "left_click_drag",
+ # "start_coordinate": [100, 150],
+ # "end_coordinate": [200, 250]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "drag",
+ # "path": [
+ # {"x": 100, "y": 150},
+ # {"x": 200, "y": 250}
+ # ]
+ # }
+ # }
+ start_coord = args.get("start_coordinate", [0, 0])
+ end_coord = args.get("end_coordinate", [0, 0])
responses_items.append(make_drag_item(
path=[
{
@@ -666,7 +1066,31 @@ def _convert_completion_to_responses_items(response: Any) -> List[Dict[str, Any]
call_id=call_id
))
elif action_type == "right_click":
- coordinate = tool_input.get("coordinate", [0, 0])
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "right_click",
+ # "coordinate": [120, 180]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "click",
+ # "x": 120,
+ # "y": 180,
+ # "button": "right"
+ # }
+ # }
+ coordinate = args.get("coordinate", [0, 0])
responses_items.append(make_click_item(
x=coordinate[0] if len(coordinate) > 0 else 0,
y=coordinate[1] if len(coordinate) > 1 else 0,
@@ -674,7 +1098,31 @@ def _convert_completion_to_responses_items(response: Any) -> List[Dict[str, Any]
call_id=call_id
))
elif action_type == "middle_click":
- coordinate = tool_input.get("coordinate", [0, 0])
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "middle_click",
+ # "coordinate": [140, 220]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "click",
+ # "x": 140,
+ # "y": 220,
+ # "button": "wheel"
+ # }
+ # }
+ coordinate = args.get("coordinate", [0, 0])
responses_items.append(make_click_item(
x=coordinate[0] if len(coordinate) > 0 else 0,
y=coordinate[1] if len(coordinate) > 1 else 0,
@@ -682,518 +1130,175 @@ def _convert_completion_to_responses_items(response: Any) -> List[Dict[str, Any]
call_id=call_id
))
elif action_type == "double_click":
- coordinate = tool_input.get("coordinate", [0, 0])
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "double_click",
+ # "coordinate": [160, 240]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "double_click",
+ # "x": 160,
+ # "y": 240
+ # }
+ # }
+ coordinate = args.get("coordinate", [0, 0])
responses_items.append(make_double_click_item(
x=coordinate[0] if len(coordinate) > 0 else 0,
y=coordinate[1] if len(coordinate) > 1 else 0,
call_id=call_id
))
elif action_type == "triple_click":
- # coordinate = tool_input.get("coordinate", [0, 0])
- # responses_items.append({
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "triple_click",
+ # "coordinate": [180, 260]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
# "type": "computer_call",
- # "call_id": call_id,
+ # "call_id": "call_1",
# "action": {
# "type": "triple_click",
- # "x": coordinate[0] if len(coordinate) > 0 else 0,
- # "y": coordinate[1] if len(coordinate) > 1 else 0
+ # "x": 180,
+ # "y": 260
# }
- # })
+ # }
raise NotImplementedError("triple_click")
elif action_type == "left_mouse_down":
- # coordinate = tool_input.get("coordinate", [0, 0])
- # responses_items.append({
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "left_mouse_down",
+ # "coordinate": [200, 280]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
# "type": "computer_call",
- # "call_id": call_id,
+ # "call_id": "call_1",
# "action": {
# "type": "mouse_down",
# "button": "left",
- # "x": coordinate[0] if len(coordinate) > 0 else 0,
- # "y": coordinate[1] if len(coordinate) > 1 else 0
+ # "x": 200,
+ # "y": 280
# }
- # })
- raise NotImplementedError("left_mouse_down")
+ # }
+ coordinate = args.get("coordinate", [None, None])
+ responses_items.append(make_left_mouse_down_item(
+ x=coordinate[0] if len(coordinate) > 0 else None,
+ y=coordinate[1] if len(coordinate) > 1 else None,
+ call_id=call_id
+ ))
elif action_type == "left_mouse_up":
- # coordinate = tool_input.get("coordinate", [0, 0])
- # responses_items.append({
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "left_mouse_up",
+ # "coordinate": [220, 300]
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
# "type": "computer_call",
- # "call_id": call_id,
+ # "call_id": "call_1",
# "action": {
# "type": "mouse_up",
# "button": "left",
- # "x": coordinate[0] if len(coordinate) > 0 else 0,
- # "y": coordinate[1] if len(coordinate) > 1 else 0
+ # "x": 220,
+ # "y": 300
# }
- # })
- raise NotImplementedError("left_mouse_up")
+ # }
+ coordinate = args.get("coordinate", [None, None])
+ responses_items.append(make_left_mouse_up_item(
+ x=coordinate[0] if len(coordinate) > 0 else None,
+ y=coordinate[1] if len(coordinate) > 1 else None,
+ call_id=call_id
+ ))
elif action_type == "hold_key":
- # responses_items.append({
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "hold_key",
+ # "key": "shift"
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
# "type": "computer_call",
- # "call_id": call_id,
+ # "call_id": "call_1",
# "action": {
# "type": "key_hold",
- # "key": tool_input.get("key", "")
+ # "key": "shift"
# }
- # })
+ # }
raise NotImplementedError("hold_key")
elif action_type == "wait":
+ # Input:
+ # {
+ # "function": {
+ # "name": "computer",
+ # "arguments": json.dumps({
+ # "action": "wait"
+ # })
+ # },
+ # "id": "call_1",
+ # "type": "function"
+ # }
+
+ # Output:
+ # {
+ # "type": "computer_call",
+ # "call_id": "call_1",
+ # "action": {
+ # "type": "wait"
+ # }
+ # }
responses_items.append(make_wait_item(
call_id=call_id
))
+                        else:
+                            raise ValueError(f"Unknown action type: {action_type}")
- else:
- raise ValueError(f"Unknown action type: {action_type}")
-
- # Handle tool calls (alternative format)
- if hasattr(message, 'tool_calls') and message.tool_calls:
- for tool_call in message.tool_calls:
- if tool_call.function.name == "computer":
- try:
- args = json.loads(tool_call.function.arguments)
- action_type = args.get("action")
- call_id = tool_call.id
-
- # Basic actions (all versions)
- if action_type == "screenshot":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "screenshot"
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "screenshot"
- # }
- # }
- responses_items.append(make_screenshot_item(
- call_id=call_id
- ))
- elif action_type in ["click", "left_click"]:
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "click",
- # "coordinate": [100, 200]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "click",
- # "x": 100,
- # "y": 200
- # }
- # }
- coordinate = args.get("coordinate", [0, 0])
- responses_items.append(make_click_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- call_id=call_id
- ))
- elif action_type in ["type", "type_text"]:
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "type",
- # "text": "Hello World"
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "type",
- # "text": "Hello World"
- # }
- # }
- responses_items.append(make_type_item(
- text=args.get("text", ""),
- call_id=call_id
- ))
- elif action_type in ["key", "keypress", "hotkey"]:
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "key",
- # "text": "ctrl+c"
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "keypress",
- # "keys": ["ctrl", "c"]
- # }
- # }
- responses_items.append(make_keypress_item(
- keys=args.get("text", "").replace("+", "-").split("-"),
- call_id=call_id
- ))
- elif action_type in ["mouse_move", "move_cursor", "move"]:
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "mouse_move",
- # "coordinate": [150, 250]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "mouse_move",
- # "x": 150,
- # "y": 250
- # }
- # }
- coordinate = args.get("coordinate", [0, 0])
- responses_items.append(make_move_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- call_id=call_id
- ))
-
- # Enhanced actions (computer_20250124) Available in Claude 4 and Claude Sonnet 3.7
- elif action_type == "scroll":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "scroll",
- # "coordinate": [300, 400],
- # "scroll_direction": "down",
- # "scroll_amount": 5
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "scroll",
- # "x": 300,
- # "y": 400,
- # "scroll_x": 0,
- # "scroll_y": -5
- # }
- # }
- coordinate = args.get("coordinate", [0, 0])
- direction = args.get("scroll_direction", "down")
- amount = args.get("scroll_amount", 3)
- scroll_x = amount if direction == "left" else \
- -amount if direction == "right" else 0
- scroll_y = amount if direction == "up" else \
- -amount if direction == "down" else 0
- responses_items.append(make_scroll_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- scroll_x=scroll_x,
- scroll_y=scroll_y,
- call_id=call_id
- ))
- elif action_type in ["left_click_drag", "drag"]:
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "left_click_drag",
- # "start_coordinate": [100, 150],
- # "end_coordinate": [200, 250]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "drag",
- # "path": [
- # {"x": 100, "y": 150},
- # {"x": 200, "y": 250}
- # ]
- # }
- # }
- start_coord = args.get("start_coordinate", [0, 0])
- end_coord = args.get("end_coordinate", [0, 0])
- responses_items.append(make_drag_item(
- path=[
- {
- "x": start_coord[0] if len(start_coord) > 0 else 0,
- "y": start_coord[1] if len(start_coord) > 1 else 0
- },
- {
- "x": end_coord[0] if len(end_coord) > 0 else 0,
- "y": end_coord[1] if len(end_coord) > 1 else 0
- }
- ],
- call_id=call_id
- ))
- elif action_type == "right_click":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "right_click",
- # "coordinate": [120, 180]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "click",
- # "x": 120,
- # "y": 180,
- # "button": "right"
- # }
- # }
- coordinate = args.get("coordinate", [0, 0])
- responses_items.append(make_click_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- button="right",
- call_id=call_id
- ))
- elif action_type == "middle_click":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "middle_click",
- # "coordinate": [140, 220]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "click",
- # "x": 140,
- # "y": 220,
- # "button": "wheel"
- # }
- # }
- coordinate = args.get("coordinate", [0, 0])
- responses_items.append(make_click_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- button="wheel",
- call_id=call_id
- ))
- elif action_type == "double_click":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "double_click",
- # "coordinate": [160, 240]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "double_click",
- # "x": 160,
- # "y": 240
- # }
- # }
- coordinate = args.get("coordinate", [0, 0])
- responses_items.append(make_double_click_item(
- x=coordinate[0] if len(coordinate) > 0 else 0,
- y=coordinate[1] if len(coordinate) > 1 else 0,
- call_id=call_id
- ))
- elif action_type == "triple_click":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "triple_click",
- # "coordinate": [180, 260]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "triple_click",
- # "x": 180,
- # "y": 260
- # }
- # }
- raise NotImplementedError("triple_click")
- elif action_type == "left_mouse_down":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "left_mouse_down",
- # "coordinate": [200, 280]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "mouse_down",
- # "button": "left",
- # "x": 200,
- # "y": 280
- # }
- # }
- raise NotImplementedError("left_mouse_down")
- elif action_type == "left_mouse_up":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "left_mouse_up",
- # "coordinate": [220, 300]
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "mouse_up",
- # "button": "left",
- # "x": 220,
- # "y": 300
- # }
- # }
- raise NotImplementedError("left_mouse_up")
- elif action_type == "hold_key":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "hold_key",
- # "key": "shift"
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "key_hold",
- # "key": "shift"
- # }
- # }
- raise NotImplementedError("hold_key")
- elif action_type == "wait":
- # Input:
- # {
- # "function": {
- # "name": "computer",
- # "arguments": json.dumps({
- # "action": "wait"
- # })
- # },
- # "id": "call_1",
- # "type": "function"
- # }
-
- # Output:
- # {
- # "type": "computer_call",
- # "call_id": "call_1",
- # "action": {
- # "type": "wait"
- # }
- # }
- responses_items.append(make_wait_item(
+ except Exception as e:
+ responses_items.extend(make_failed_tool_call_items(
+ tool_name="computer",
+ tool_kwargs=args,
+ error_message=repr(e),
call_id=call_id
))
except json.JSONDecodeError:
@@ -1284,84 +1389,192 @@ def _merge_consecutive_text(content_list: List[Dict[str, Any]]) -> List[Dict[str
return merged
-@agent_loop(models=r".*claude-.*", priority=5)
-async def anthropic_hosted_tools_loop(
- messages: Messages,
- model: str,
- tools: Optional[List[Dict[str, Any]]] = None,
- max_retries: Optional[int] = None,
- stream: bool = False,
- computer_handler=None,
- use_prompt_caching: Optional[bool] = False,
- _on_api_start=None,
- _on_api_end=None,
- _on_usage=None,
- _on_screenshot=None,
- **kwargs
-) -> Union[AgentResponse, AsyncGenerator[Dict[str, Any], None]]:
- """
- Anthropic hosted tools agent loop using liteLLM acompletion.
+@register_agent(models=r".*claude-.*")
+class AnthropicHostedToolsConfig(AsyncAgentConfig):
+ """Anthropic hosted tools agent configuration implementing AsyncAgentConfig protocol."""
- Supports Anthropic's computer use models with hosted tools.
- """
- tools = tools or []
-
- # Get tool configuration for this model
- tool_config = _get_tool_config_for_model(model)
-
- # Prepare tools for Anthropic API
- anthropic_tools = _prepare_tools_for_anthropic(tools, model)
-
- # Convert responses_items messages to completion format
- completion_messages = _convert_responses_items_to_completion_messages(messages)
- if use_prompt_caching:
- # First combine messages to reduce number of blocks
- completion_messages = _combine_completion_messages(completion_messages)
- # Then add cache control, anthropic requires explicit "cache_control" dicts
- completion_messages = _add_cache_control(completion_messages)
-
- # Prepare API call kwargs
- api_kwargs = {
- "model": model,
- "messages": completion_messages,
- "tools": anthropic_tools if anthropic_tools else None,
- "stream": stream,
- "num_retries": max_retries,
+ async def predict_step(
+ self,
+ messages: Messages,
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ use_prompt_caching: Optional[bool] = False,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
**kwargs
- }
-
- # Add beta header for computer use
- if anthropic_tools:
- api_kwargs["headers"] = {
- "anthropic-beta": tool_config["beta_flag"]
+ ) -> Dict[str, Any]:
+ """
+ Anthropic hosted tools agent loop using liteLLM acompletion.
+
+ Supports Anthropic's computer use models with hosted tools.
+ """
+ tools = tools or []
+
+ # Get tool configuration for this model
+ tool_config = _get_tool_config_for_model(model)
+
+ # Prepare tools for Anthropic API
+ anthropic_tools = await _prepare_tools_for_anthropic(tools, model)
+
+ # Convert responses_items messages to completion format
+ completion_messages = _convert_responses_items_to_completion_messages(messages)
+ if use_prompt_caching:
+ # First combine messages to reduce number of blocks
+ completion_messages = _combine_completion_messages(completion_messages)
+ # Then add cache control, anthropic requires explicit "cache_control" dicts
+ completion_messages = _add_cache_control(completion_messages)
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": completion_messages,
+ "tools": anthropic_tools if anthropic_tools else None,
+ "stream": stream,
+ "num_retries": max_retries,
+ **kwargs
+ }
+
+ # Add beta header for computer use
+ if anthropic_tools:
+ api_kwargs["headers"] = {
+ "anthropic-beta": tool_config["beta_flag"]
+ }
+
+ # Call API start hook
+ if _on_api_start:
+ await _on_api_start(api_kwargs)
+
+ # Use liteLLM acompletion
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Call API end hook
+ if _on_api_end:
+ await _on_api_end(api_kwargs, response)
+
+ # Convert response to responses_items format
+ responses_items = _convert_completion_to_responses_items(response)
+
+ # Extract usage information
+ responses_usage = {
+ **LiteLLMCompletionResponsesConfig._transform_chat_completion_usage_to_responses_usage(response.usage).model_dump(),
+ "response_cost": response._hidden_params.get("response_cost", 0.0),
+ }
+ if _on_usage:
+ await _on_usage(responses_usage)
+
+ # Return in AsyncAgentConfig format
+ return {
+ "output": responses_items,
+ "usage": responses_usage
}
- # Call API start hook
- if _on_api_start:
- await _on_api_start(api_kwargs)
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str,
+ **kwargs
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates based on image and instruction.
+
+ Uses Anthropic's computer use models with a custom prompt that instructs
+ the agent to only output clicks.
+
+ Args:
+ model: Model name to use
+ image_b64: Base64 encoded image
+ instruction: Instruction for where to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ # Get image dimensions from base64 data
+ try:
+ import base64
+ from PIL import Image
+ from io import BytesIO
+
+ image_data = base64.b64decode(image_b64)
+ image = Image.open(BytesIO(image_data))
+ display_width, display_height = image.size
+ except Exception:
+ # Fallback to default dimensions if image parsing fails
+ display_width, display_height = 1024, 768
+
+ # Get tool configuration for this model
+ tool_config = _get_tool_config_for_model(model)
+
+ # Prepare computer tool for Anthropic format
+ computer_tool = {
+ "type": tool_config["tool_version"],
+ "function": {
+ "name": "computer",
+ "parameters": {
+ "display_height_px": display_height,
+ "display_width_px": display_width,
+ "display_number": 1,
+ },
+ },
+ }
+
+ # Construct messages in OpenAI chat completion format for liteLLM
+ messages = [
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": f"You are a UI grounding expert. Look at the image and {instruction}. Output ONLY a click action on the target element. No explanations, confirmations, or additional text."
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": f"data:image/png;base64,{image_b64}"
+ }
+ }
+ ]
+ }
+ ]
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": messages,
+ "tools": [computer_tool],
+ "stream": False,
+ "max_tokens": 100, # Keep response short for click prediction
+ "headers": {
+ "anthropic-beta": tool_config["beta_flag"]
+ }
+ }
- # Use liteLLM acompletion
- response = await litellm.acompletion(**api_kwargs)
+ # Use liteLLM acompletion
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Convert response to responses_items format to extract click coordinates
+ responses_items = _convert_completion_to_responses_items(response)
+
+ # Look for computer_call with click action
+ for item in responses_items:
+ if (isinstance(item, dict) and
+ item.get("type") == "computer_call" and
+ isinstance(item.get("action"), dict)):
+
+ action = item["action"]
+ if action.get("type") == "click":
+ x = action.get("x")
+ y = action.get("y")
+ if x is not None and y is not None:
+ return (int(x), int(y))
+
+ return None
- # Call API end hook
- if _on_api_end:
- await _on_api_end(api_kwargs, response)
-
- # Convert response to responses_items format
- responses_items = _convert_completion_to_responses_items(response)
-
- # Extract usage information
- responses_usage = {
- **LiteLLMCompletionResponsesConfig._transform_chat_completion_usage_to_responses_usage(response.usage).model_dump(),
- "response_cost": response._hidden_params.get("response_cost", 0.0),
- }
- if _on_usage:
- await _on_usage(responses_usage)
-
- # Create agent response
- agent_response = {
- "output": responses_items,
- "usage": responses_usage
- }
-
- return agent_response
+ def get_capabilities(self) -> List[AgentCapability]:
+ """Return the capabilities supported by this agent."""
+ return ["click", "step"]
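
The change above replaces a decorated loop function with a class registered against a model-name regex via `@register_agent(models=r".*claude-.*")`. A minimal sketch of how such regex-based routing could work; the registry, `register`, and `find_config` below are illustrative stand-ins, not the library's actual internals:

```python
import re

# Hypothetical registry mapping model-name regexes to agent config classes,
# mirroring the @register_agent(models=r".*claude-.*") pattern in the diff.
_REGISTRY = []

def register(pattern, priority=0):
    def deco(cls):
        _REGISTRY.append((re.compile(pattern), priority, cls))
        return cls
    return deco

@register(r".*claude-.*")
class ClaudeConfig:
    pass

def find_config(model):
    # The highest-priority pattern that matches the model name wins.
    candidates = [(prio, cls) for rx, prio, cls in _REGISTRY if rx.match(model)]
    return max(candidates, key=lambda t: t[0])[1] if candidates else None

print(find_config("anthropic/claude-opus-4-1-20250805").__name__)  # ClaudeConfig
```

Because matching is by regex, one config class serves every provider-prefixed variant of a model family (e.g. `anthropic/claude-...` and `openrouter/anthropic/claude-...`).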
diff --git a/libs/python/agent/agent/loops/base.py b/libs/python/agent/agent/loops/base.py
new file mode 100644
index 00000000..887605b1
--- /dev/null
+++ b/libs/python/agent/agent/loops/base.py
@@ -0,0 +1,76 @@
+"""
+Base protocol for async agent configurations
+"""
+
+from typing import Protocol, List, Dict, Any, Optional, Tuple, Union
+from abc import abstractmethod
+from ..types import AgentCapability
+
+class AsyncAgentConfig(Protocol):
+ """Protocol defining the interface for async agent configurations."""
+
+ @abstractmethod
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
+ **kwargs
+ ) -> Dict[str, Any]:
+ """
+ Predict the next step based on input items.
+
+ Args:
+ messages: Input items following Responses format (message, function_call, computer_call)
+ model: Model name to use
+ tools: Optional list of tool schemas
+ max_retries: Maximum number of retries for failed API calls
+ stream: Whether to stream responses
+ computer_handler: Computer handler instance
+ _on_api_start: Callback for API start
+ _on_api_end: Callback for API end
+ _on_usage: Callback for usage tracking
+ _on_screenshot: Callback for screenshot events
+ **kwargs: Additional arguments
+
+ Returns:
+            Dictionary with "output" (output items) and "usage" (usage info dict)
+ """
+ ...
+
+ @abstractmethod
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates based on image and instruction.
+
+ Args:
+ model: Model name to use
+ image_b64: Base64 encoded image
+ instruction: Instruction for where to click
+
+ Returns:
+ None or tuple with (x, y) coordinates
+ """
+ ...
+
+ @abstractmethod
+ def get_capabilities(self) -> List[AgentCapability]:
+ """
+ Get list of capabilities supported by this agent config.
+
+ Returns:
+ List of capability strings (e.g., ["step", "click"])
+ """
+ ...
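
A minimal illustrative implementation of the `AsyncAgentConfig` protocol above; the `EchoConfig` class and its canned reply are hypothetical stand-ins for demonstration, not part of the SDK:

```python
import asyncio
from typing import Any, Dict, List, Optional, Tuple

# Hypothetical config satisfying the AsyncAgentConfig protocol: it
# implements predict_step, predict_click, and get_capabilities.
class EchoConfig:
    async def predict_step(self, messages, model, tools=None, **kwargs) -> Dict[str, Any]:
        # Echo a fixed assistant message; a real config would call a model API
        # and return its output items plus usage info.
        return {
            "output": [{"type": "message", "role": "assistant",
                        "content": [{"type": "output_text", "text": "ok"}]}],
            "usage": {},
        }

    async def predict_click(self, model, image_b64, instruction) -> Optional[Tuple[int, int]]:
        return None  # grounding not supported by this stub

    def get_capabilities(self) -> List[str]:
        return ["step"]

result = asyncio.run(EchoConfig().predict_step([], "stub-model"))
print(result["output"][0]["content"][0]["text"])  # ok
```

Since `Protocol` classes are checked structurally, any class with these three members conforms without inheriting from `AsyncAgentConfig`.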
diff --git a/libs/python/agent/agent/loops/composed_grounded.py b/libs/python/agent/agent/loops/composed_grounded.py
new file mode 100644
index 00000000..cf029d13
--- /dev/null
+++ b/libs/python/agent/agent/loops/composed_grounded.py
@@ -0,0 +1,318 @@
+"""
+Composed-grounded agent loop implementation that combines grounding and thinking models.
+Uses a two-stage approach: grounding model for element detection, thinking model for reasoning.
+"""
+
+import uuid
+import asyncio
+import json
+import base64
+from typing import Dict, List, Any, Optional, Tuple
+from io import BytesIO
+from PIL import Image
+import litellm
+
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
+from ..loops.base import AsyncAgentConfig
+from ..responses import (
+ convert_computer_calls_xy2desc,
+ convert_responses_items_to_completion_messages,
+ convert_completion_messages_to_responses_items,
+ convert_computer_calls_desc2xy,
+ get_all_element_descriptions
+)
+from ..agent import find_agent_config
+
+GROUNDED_COMPUTER_TOOL_SCHEMA = {
+ "type": "function",
+ "function": {
+ "name": "computer",
+ "description": "Control a computer by taking screenshots and interacting with UI elements. This tool uses element descriptions to locate and interact with UI elements on the screen (e.g., 'red submit button', 'search text field', 'hamburger menu icon', 'close button in top right corner').",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "action": {
+ "type": "string",
+ "enum": [
+ "screenshot",
+ "click",
+ "double_click",
+ "drag",
+ "type",
+ "keypress",
+ "scroll",
+ "move",
+ "wait",
+ "get_current_url",
+ "get_dimensions",
+ "get_environment"
+ ],
+ "description": "The action to perform"
+ },
+ "element_description": {
+ "type": "string",
+ "description": "Description of the element to interact with (required for click, double_click, move, scroll actions, and as start/end for drag)"
+ },
+ "start_element_description": {
+ "type": "string",
+ "description": "Description of the element to start dragging from (required for drag action)"
+ },
+ "end_element_description": {
+ "type": "string",
+ "description": "Description of the element to drag to (required for drag action)"
+ },
+ "text": {
+ "type": "string",
+ "description": "The text to type (required for type action)"
+ },
+ "keys": {
+ "type": "string",
+ "description": "Key combination to press (required for keypress action). Single key for individual key press, multiple keys for combinations (e.g., 'ctrl+c')"
+ },
+ "button": {
+ "type": "string",
+          "description": "The mouse button to use for click action (left, right, wheel, back, forward). Default: left",
+ },
+ "scroll_x": {
+ "type": "integer",
+ "description": "Horizontal scroll amount for scroll action (positive for right, negative for left)",
+ },
+ "scroll_y": {
+ "type": "integer",
+ "description": "Vertical scroll amount for scroll action (positive for down, negative for up)",
+ },
+ },
+ "required": [
+ "action"
+ ]
+ }
+ }
+}
+
+def _prepare_tools_for_grounded(tool_schemas: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
+ """Prepare tools for grounded API format"""
+ grounded_tools = []
+
+ for schema in tool_schemas:
+ if schema["type"] == "computer":
+ grounded_tools.append(GROUNDED_COMPUTER_TOOL_SCHEMA)
+ else:
+ grounded_tools.append(schema)
+
+ return grounded_tools
+
+def get_last_computer_call_image(messages: List[Dict[str, Any]]) -> Optional[str]:
+ """Get the last computer call output image from messages."""
+ for message in reversed(messages):
+ if (isinstance(message, dict) and
+ message.get("type") == "computer_call_output" and
+ isinstance(message.get("output"), dict) and
+ message["output"].get("type") == "input_image"):
+ image_url = message["output"].get("image_url", "")
+ if image_url.startswith("data:image/png;base64,"):
+ return image_url.split(",", 1)[1]
+ return None
+
+
+@register_agent(r".*\+.*", priority=1)
+class ComposedGroundedConfig:
+ """
+ Composed-grounded agent configuration that uses both grounding and thinking models.
+
+ The model parameter should be in format: "grounding_model+thinking_model"
+ e.g., "huggingface-local/HelloKKMe/GTA1-7B+gemini/gemini-1.5-pro"
+ """
+
+ def __init__(self):
+ self.desc2xy: Dict[str, Tuple[float, float]] = {}
+
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ use_prompt_caching: Optional[bool] = False,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
+ **kwargs
+ ) -> Dict[str, Any]:
+ """
+ Composed-grounded predict step implementation.
+
+ Process:
+ 0. Store last computer call image, if none then take a screenshot
+ 1. Convert computer calls from xy to descriptions
+ 2. Convert responses items to completion messages
+ 3. Call thinking model with litellm.acompletion
+ 4. Convert completion messages to responses items
+ 5. Get all element descriptions and populate desc2xy mapping
+ 6. Convert computer calls from descriptions back to xy coordinates
+ 7. Return output and usage
+ """
+ # Parse the composed model
+ if "+" not in model:
+ raise ValueError(f"Composed model must be in format 'grounding_model+thinking_model', got: {model}")
+ grounding_model, thinking_model = model.split("+", 1)
+
+ pre_output_items = []
+
+ # Step 0: Store last computer call image, if none then take a screenshot
+ last_image_b64 = get_last_computer_call_image(messages)
+ if last_image_b64 is None:
+ # Take a screenshot
+ screenshot_b64 = await computer_handler.screenshot() # type: ignore
+ if screenshot_b64:
+
+ call_id = uuid.uuid4().hex
+ pre_output_items += [
+ {
+ "type": "message",
+ "role": "assistant",
+ "content": [
+ {
+ "type": "output_text",
+ "text": "Taking a screenshot to see the current computer screen."
+ }
+ ]
+ },
+ {
+ "action": {
+ "type": "screenshot"
+ },
+ "call_id": call_id,
+ "status": "completed",
+ "type": "computer_call"
+ },
+ {
+ "type": "computer_call_output",
+ "call_id": call_id,
+ "output": {
+ "type": "input_image",
+ "image_url": f"data:image/png;base64,{screenshot_b64}"
+ }
+ },
+ ]
+ last_image_b64 = screenshot_b64
+
+ # Call screenshot callback if provided
+ if _on_screenshot:
+ await _on_screenshot(screenshot_b64)
+
+ tool_schemas = _prepare_tools_for_grounded(tools) # type: ignore
+
+ # Step 1: Convert computer calls from xy to descriptions
+ input_messages = messages + pre_output_items
+ messages_with_descriptions = convert_computer_calls_xy2desc(input_messages, self.desc2xy)
+
+ # Step 2: Convert responses items to completion messages
+ completion_messages = convert_responses_items_to_completion_messages(
+ messages_with_descriptions,
+ allow_images_in_tool_results=False
+ )
+
+ # Step 3: Call thinking model with litellm.acompletion
+ api_kwargs = {
+ "model": thinking_model,
+ "messages": completion_messages,
+ "tools": tool_schemas,
+ "max_retries": max_retries,
+ "stream": stream,
+ **kwargs
+ }
+
+ if use_prompt_caching:
+ api_kwargs["use_prompt_caching"] = use_prompt_caching
+
+ # Call API start hook
+ if _on_api_start:
+ await _on_api_start(api_kwargs)
+
+ # Make the completion call
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Call API end hook
+ if _on_api_end:
+ await _on_api_end(api_kwargs, response)
+
+ # Extract usage information
+ usage = {
+ **response.usage.model_dump(), # type: ignore
+ "response_cost": response._hidden_params.get("response_cost", 0.0),
+ }
+ if _on_usage:
+ await _on_usage(usage)
+
+ # Step 4: Convert completion messages back to responses items format
+ response_dict = response.model_dump() # type: ignore
+ choice_messages = [choice["message"] for choice in response_dict["choices"]]
+ thinking_output_items = []
+
+ for choice_message in choice_messages:
+ thinking_output_items.extend(convert_completion_messages_to_responses_items([choice_message]))
+
+ # Step 5: Get all element descriptions and populate desc2xy mapping
+ element_descriptions = get_all_element_descriptions(thinking_output_items)
+
+ if element_descriptions and last_image_b64:
+ # Use grounding model to predict coordinates for each description
+ grounding_agent_conf = find_agent_config(grounding_model)
+ if grounding_agent_conf:
+ grounding_agent = grounding_agent_conf.agent_class()
+
+ for desc in element_descriptions:
+ coords = await grounding_agent.predict_click(
+ model=grounding_model,
+ image_b64=last_image_b64,
+ instruction=desc
+ )
+ if coords:
+ self.desc2xy[desc] = coords
+
+ # Step 6: Convert computer calls from descriptions back to xy coordinates
+ final_output_items = convert_computer_calls_desc2xy(thinking_output_items, self.desc2xy)
+
+ # Step 7: Return output and usage
+ return {
+ "output": pre_output_items + final_output_items,
+ "usage": usage
+ }
+
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str,
+ **kwargs
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates using the grounding model.
+
+ For composed models, uses only the grounding model part for click prediction.
+ """
+ # Parse the composed model to get grounding model
+ if "+" not in model:
+ raise ValueError(f"Composed model must be in format 'grounding_model+thinking_model', got: {model}")
+ grounding_model, thinking_model = model.split("+", 1)
+
+ # Find and use the grounding agent
+ grounding_agent_conf = find_agent_config(grounding_model)
+ if grounding_agent_conf:
+ grounding_agent = grounding_agent_conf.agent_class()
+ return await grounding_agent.predict_click(
+ model=grounding_model,
+ image_b64=image_b64,
+ instruction=instruction,
+ **kwargs
+ )
+
+ return None
+
+ def get_capabilities(self) -> List[AgentCapability]:
+ """Return the capabilities supported by this agent."""
+ return ["click", "step"]
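
The grounding stage of `ComposedGroundedConfig.predict_step` can be sketched in isolation. Here `fake_grounder` and `cache` are stand-ins for the awaited `grounding_agent.predict_click` call and the `self.desc2xy` mapping; the helper name is illustrative:

```python
from typing import Dict, Optional, Tuple

# Sketch of step 5 above: resolve each element description emitted by the
# thinking model to (x, y) coordinates, caching results in a desc->xy dict.
def ground_descriptions(descriptions, desc2xy: Dict[str, Tuple[int, int]], grounder):
    for desc in descriptions:
        coords = grounder(desc)  # real code awaits grounding_agent.predict_click
        if coords:
            desc2xy[desc] = coords
    return desc2xy

def fake_grounder(desc) -> Optional[Tuple[int, int]]:
    # Stand-in for a grounding model; only "knows" one element.
    return (120, 340) if "submit" in desc else None

cache: Dict[str, Tuple[int, int]] = {}
ground_descriptions(["red submit button", "unknown widget"], cache, fake_grounder)
print(cache)  # {'red submit button': (120, 340)}
```

Descriptions the grounder cannot resolve are simply left out of the cache, so the later desc-to-xy conversion only rewrites calls with known coordinates.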
diff --git a/libs/python/agent/agent/loops/glm45v.py b/libs/python/agent/agent/loops/glm45v.py
new file mode 100644
index 00000000..adc87026
--- /dev/null
+++ b/libs/python/agent/agent/loops/glm45v.py
@@ -0,0 +1,902 @@
+"""
+GLM-4.5V agent loop implementation using liteLLM for GLM-4.5V model.
+Supports vision-language models for computer control with bounding box parsing.
+"""
+
+import asyncio
+import json
+import base64
+import re
+from typing import Dict, List, Any, Optional, Tuple
+from io import BytesIO
+from PIL import Image
+import litellm
+from litellm.types.utils import ModelResponse
+from litellm.responses.litellm_completion_transformation.transformation import LiteLLMCompletionResponsesConfig
+
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
+from ..loops.base import AsyncAgentConfig
+from ..responses import (
+ convert_responses_items_to_completion_messages,
+ convert_completion_messages_to_responses_items,
+ make_reasoning_item,
+ make_output_text_item,
+ make_click_item,
+ make_double_click_item,
+ make_drag_item,
+ make_keypress_item,
+ make_scroll_item,
+ make_type_item,
+ make_wait_item,
+ make_input_image_item
+)
+
+# GLM-4.5V specific constants
+GLM_ACTION_SPACE = """
+### {left,right,middle}_click
+
+Call rule: `{left,right,middle}_click(start_box='[x,y]', element_info='')`
+{
+ 'name': ['left_click', 'right_click', 'middle_click'],
+ 'description': 'Perform a left/right/middle mouse click at the specified coordinates on the screen.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'start_box': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'integer'
+ },
+ 'description': 'Coordinates [x,y] where to perform the click, normalized to 0-999 range.'
+ },
+ 'element_info': {
+ 'type': 'string',
+ 'description': 'Optional text description of the UI element being clicked.'
+ }
+ },
+ 'required': ['start_box']
+ }
+}
+
+### hover
+
+Call rule: `hover(start_box='[x,y]', element_info='')`
+{
+ 'name': 'hover',
+ 'description': 'Move the mouse pointer to the specified coordinates without performing any click action.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'start_box': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'integer'
+ },
+ 'description': 'Coordinates [x,y] where to move the mouse pointer, normalized to 0-999 range.'
+ },
+ 'element_info': {
+ 'type': 'string',
+ 'description': 'Optional text description of the UI element being hovered over.'
+ }
+ },
+ 'required': ['start_box']
+ }
+}
+
+### left_double_click
+
+Call rule: `left_double_click(start_box='[x,y]', element_info='')`
+{
+ 'name': 'left_double_click',
+ 'description': 'Perform a left mouse double-click at the specified coordinates on the screen.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'start_box': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'integer'
+ },
+ 'description': 'Coordinates [x,y] where to perform the double-click, normalized to 0-999 range.'
+ },
+ 'element_info': {
+ 'type': 'string',
+ 'description': 'Optional text description of the UI element being double-clicked.'
+ }
+ },
+ 'required': ['start_box']
+ }
+}
+
+### left_drag
+
+Call rule: `left_drag(start_box='[x1,y1]', end_box='[x2,y2]', element_info='')`
+{
+ 'name': 'left_drag',
+ 'description': 'Drag the mouse from starting coordinates to ending coordinates while holding the left mouse button.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'start_box': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'integer'
+ },
+ 'description': 'Starting coordinates [x1,y1] for the drag operation, normalized to 0-999 range.'
+ },
+ 'end_box': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'integer'
+ },
+ 'description': 'Ending coordinates [x2,y2] for the drag operation, normalized to 0-999 range.'
+ },
+ 'element_info': {
+ 'type': 'string',
+ 'description': 'Optional text description of the UI element being dragged.'
+ }
+ },
+ 'required': ['start_box', 'end_box']
+ }
+}
+
+### key
+
+Call rule: `key(keys='')`
+{
+ 'name': 'key',
+ 'description': 'Simulate pressing a single key or combination of keys on the keyboard.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'keys': {
+ 'type': 'string',
+ 'description': 'The key or key combination to press. Use '+' to separate keys in combinations (e.g., 'ctrl+c', 'alt+tab').'
+ }
+ },
+ 'required': ['keys']
+ }
+}
+
+### type
+
+Call rule: `type(content='')`
+{
+ 'name': 'type',
+ 'description': 'Type text content into the currently focused text input field. This action only performs typing and does not handle field activation or clearing.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'content': {
+ 'type': 'string',
+ 'description': 'The text content to be typed into the active text field.'
+ }
+ },
+ 'required': ['content']
+ }
+}
+
+### scroll
+
+Call rule: `scroll(start_box='[x,y]', direction='', step=5, element_info='')`
+{
+ 'name': 'scroll',
+ 'description': 'Scroll an element at the specified coordinates in the specified direction by a given number of wheel steps.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {
+ 'start_box': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'integer'
+ },
+ 'description': 'Coordinates [x,y] of the element or area to scroll, normalized to 0-999 range.'
+ },
+ 'direction': {
+ 'type': 'string',
+ 'enum': ['down', 'up'],
+ 'description': 'The direction to scroll: 'down' or 'up'.'
+ },
+ 'step': {
+ 'type': 'integer',
+ 'default': 5,
+ 'description': 'Number of wheel steps to scroll, default is 5.'
+ },
+ 'element_info': {
+ 'type': 'string',
+ 'description': 'Optional text description of the UI element being scrolled.'
+ }
+ },
+ 'required': ['start_box', 'direction']
+ }
+}
+
+### WAIT
+
+Call rule: `WAIT()`
+{
+ 'name': 'WAIT',
+ 'description': 'Wait for 5 seconds before proceeding to the next action.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {},
+ 'required': []
+ }
+}
+
+### DONE
+
+Call rule: `DONE()`
+{
+ 'name': 'DONE',
+ 'description': 'Indicate that the current task has been completed successfully and no further actions are needed.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {},
+ 'required': []
+ }
+}
+
+### FAIL
+
+Call rule: `FAIL()`
+{
+ 'name': 'FAIL',
+ 'description': 'Indicate that the current task cannot be completed or is impossible to accomplish.',
+ 'parameters': {
+ 'type': 'object',
+ 'properties': {},
+ 'required': []
+ }
+}"""
+
+def encode_image_to_base64(image_path: str) -> str:
+ """Encode image file to base64 string with data URI."""
+ with open(image_path, "rb") as image_file:
+ encoded_string = base64.b64encode(image_file.read()).decode("utf-8")
+ return f"data:image/png;base64,{encoded_string}"
+
+def parse_glm_response(response: str) -> Dict[str, Any]:
+ """
+ Parse GLM-4.5V response to extract action and memory.
+
+ The special tokens <|begin_of_box|> and <|end_of_box|> mark bounding boxes.
+    Coordinates are normalized to the 0-999 range.
+ """
+ # Extract action from between special tokens
+ pattern = r"<\|begin_of_box\|>(.*?)<\|end_of_box\|>"
+ match = re.search(pattern, response)
+ if match:
+ action = match.group(1).strip()
+ else:
+ # Fallback: look for function call patterns
+ action_pattern = r"[\w_]+\([^)]*\)"
+ matches = re.findall(action_pattern, response)
+ action = matches[0] if matches else None
+
+ # Extract memory section
+ memory_pattern = r"Memory:(.*?)$"
+ memory_match = re.search(memory_pattern, response, re.DOTALL)
+ memory = memory_match.group(1).strip() if memory_match else "[]"
+
+ # Extract action text (everything before Memory:)
+ action_text_pattern = r'^(.*?)Memory:'
+ action_text_match = re.search(action_text_pattern, response, re.DOTALL)
+ action_text = action_text_match.group(1).strip() if action_text_match else response
+
+ # Clean up action text by removing special tokens
+ if action_text:
+ action_text = action_text.replace("<|begin_of_box|>", "").replace("<|end_of_box|>", "")
+
+ return {
+ "action": action,
+ "action_text": action_text,
+ "memory": memory
+ }
+
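
A trimmed-down, self-contained version of the parsing logic above, shown to illustrate how the `<|begin_of_box|>` tokens and the `Memory:` section are extracted from a synthetic reply (the `parse` helper and sample text are illustrative, not the module's actual function):

```python
import re

# Simplified sketch of parse_glm_response: pull the action out of the
# <|begin_of_box|>...<|end_of_box|> markers and the memory list after "Memory:".
def parse(response: str):
    m = re.search(r"<\|begin_of_box\|>(.*?)<\|end_of_box\|>", response)
    action = m.group(1).strip() if m else None
    mem = re.search(r"Memory:(.*?)$", response, re.DOTALL)
    memory = mem.group(1).strip() if mem else "[]"
    return {"action": action, "memory": memory}

resp = ("I will click the button. "
        "<|begin_of_box|>left_click(start_box='[500,300]')<|end_of_box|>\n"
        "Memory:\n[]")
print(parse(resp)["action"])  # left_click(start_box='[500,300]')
```

The full implementation additionally falls back to a generic `name(args)` pattern when the box tokens are absent and strips the tokens from the surrounding explanation text.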
+def get_last_image_from_messages(messages: Messages) -> Optional[str]:
+ """Extract the last image from messages for processing."""
+ for message in reversed(messages):
+ if isinstance(message, dict):
+ if message.get("type") == "computer_call_output":
+ output = message.get("output", {})
+ if isinstance(output, dict) and output.get("type") == "input_image":
+ image_url = output.get("image_url", "")
+ if isinstance(image_url, str) and image_url.startswith("data:image/"):
+ # Extract base64 part
+ return image_url.split(",", 1)[1]
+ elif message.get("role") == "user":
+ content = message.get("content", [])
+ if isinstance(content, list):
+ for item in reversed(content):
+ if isinstance(item, dict) and item.get("type") == "image_url":
+ image_url_obj = item.get("image_url", {})
+ if isinstance(image_url_obj, dict):
+ image_url = image_url_obj.get("url", "")
+ if isinstance(image_url, str) and image_url.startswith("data:image/"):
+ return image_url.split(",", 1)[1]
+ return None
+
+def convert_responses_items_to_glm45v_pc_prompt(messages: Messages, task: str, memory: str = "") -> List[Dict[str, Any]]:
+ """Convert responses items to GLM-4.5V PC prompt format with historical actions.
+
+ Args:
+ messages: List of message items from the conversation
+ task: The task description
+ memory: Current memory state
+
+ Returns:
+ List of content items for the prompt (text and image_url items)
+ """
+ action_space = GLM_ACTION_SPACE
+
+ # Template head
+ head_text = f"""You are a GUI Agent, and your primary task is to respond accurately to user requests or questions. In addition to directly answering the user's queries, you can also use tools or perform GUI operations directly until you fulfill the user's request or provide a correct answer. You should carefully read and understand the images and questions provided by the user, and engage in thinking and reflection when appropriate. The coordinates involved are all represented in thousandths (0-999).
+
+# Task:
+{task}
+
+# Task Platform
+Ubuntu
+
+# Action Space
+{action_space}
+
+# Historical Actions and Current Memory
+History:"""
+
+ # Template tail
+ tail_text = f"""
+Memory:
+{memory}
+# Output Format
+Plain text explanation with action(param='...')
+Memory:
+[{{"key": "value"}}, ...]
+
+# Some Additional Notes
+- I'll give you the most recent 4 history screenshots (shrunk to 50%*50%) along with the historical action steps.
+- You should put the key information you *have to remember* in a separate memory part and I'll give it to you in the next round. The content in this part should be a dict list. If you no longer need some given information, you should remove it from the memory. Even if you don't need to remember anything, you should also output an empty list.
+- My computer's password is "password", feel free to use it when you need sudo rights.
+- For the thunderbird account "anonym-x2024@outlook.com", the password is "gTCI";=@y7|QJ0nDa_kN3Sb&>".
+
+Current Screenshot:
+"""
+
+ # Build history from messages
+ history = []
+ history_images = []
+
+ # Group messages into steps
+ current_step = []
+ step_num = 0
+
+ for message in messages:
+ msg_type = message.get("type")
+
+ if msg_type == "reasoning":
+ current_step.append(message)
+ elif msg_type == "message" and message.get("role") == "assistant":
+ current_step.append(message)
+ elif msg_type == "computer_call":
+ current_step.append(message)
+ elif msg_type == "computer_call_output":
+ current_step.append(message)
+ # End of step - process it
+ if current_step:
+ step_num += 1
+
+ # Extract bot thought from message content
+ bot_thought = ""
+ for item in current_step:
+ if item.get("type") == "message" and item.get("role") == "assistant":
+ content = item.get("content", [])
+ for content_item in content:
+ if content_item.get("type") == "output_text":
+ bot_thought = content_item.get("text", "")
+ break
+ break
+
+ # Extract action from computer_call
+ action_text = ""
+ for item in current_step:
+ if item.get("type") == "computer_call":
+ action = item.get("action", {})
+ action_type = action.get("type", "")
+
+ if action_type == "click":
+ x, y = action.get("x", 0), action.get("y", 0)
+ # Convert to 0-999 range (assuming screen dimensions)
+ # For now, use direct coordinates - this may need adjustment
+ action_text = f"left_click(start_box='[{x},{y}]')"
+ elif action_type == "double_click":
+ x, y = action.get("x", 0), action.get("y", 0)
+ action_text = f"left_double_click(start_box='[{x},{y}]')"
+ elif action_type == "right_click":
+ x, y = action.get("x", 0), action.get("y", 0)
+ action_text = f"right_click(start_box='[{x},{y}]')"
+ elif action_type == "drag":
+ # Handle drag with path
+ path = action.get("path", [])
+ if len(path) >= 2:
+ start = path[0]
+ end = path[-1]
+ action_text = f"left_drag(start_box='[{start.get('x', 0)},{start.get('y', 0)}]', end_box='[{end.get('x', 0)},{end.get('y', 0)}]')"
+ elif action_type == "keypress":
+ key = action.get("key", "")
+ action_text = f"key(keys='{key}')"
+ elif action_type == "type":
+ text = action.get("text", "")
+ action_text = f"type(content='{text}')"
+ elif action_type == "scroll":
+ x, y = action.get("x", 0), action.get("y", 0)
+ direction = action.get("direction", "down")
+ action_text = f"scroll(start_box='[{x},{y}]', direction='{direction}')"
+ elif action_type == "wait":
+ action_text = "WAIT()"
+ break
+
+ # Extract screenshot from computer_call_output
+ screenshot_url = None
+ for item in current_step:
+ if item.get("type") == "computer_call_output":
+ output = item.get("output", {})
+ if output.get("type") == "input_image":
+ screenshot_url = output.get("image_url", "")
+ break
+
+ # Store step info
+ step_info = {
+ "step_num": step_num,
+ "bot_thought": bot_thought,
+ "action_text": action_text,
+ "screenshot_url": screenshot_url
+ }
+ history.append(step_info)
+
+ # Store screenshot for last 4 steps
+ if screenshot_url:
+ history_images.append(screenshot_url)
+
+ current_step = []
+
+ # Build content array with head, history, and tail
+ content = []
+ current_text = head_text
+
+ total_history_steps = len(history)
+ history_image_count = min(4, len(history_images)) # Last 4 images
+
+ for step_idx, step_info in enumerate(history):
+ step_num = step_info["step_num"]
+ bot_thought = step_info["bot_thought"]
+ action_text = step_info["action_text"]
+
+ if step_idx < total_history_steps - history_image_count:
+ # For steps beyond the last 4, use text placeholder
+ current_text += f"\nstep {step_num}: Screenshot:(Omitted in context.) Thought: {bot_thought}\nAction: {action_text}"
+ else:
+ # For the last 4 steps, insert images
+ current_text += f"\nstep {step_num}: Screenshot:"
+ content.append({"type": "text", "text": current_text})
+
+ # Add image
+ img_idx = step_idx - (total_history_steps - history_image_count)
+ if img_idx < len(history_images):
+ content.append({"type": "image_url", "image_url": {"url": history_images[img_idx]}})
+
+ current_text = f" Thought: {bot_thought}\nAction: {action_text}"
+
+ # Add tail
+ current_text += tail_text
+ content.append({"type": "text", "text": current_text})
+
+ return content
+
+def model_dump(obj) -> Any:
+ """Recursively convert pydantic models (and nested dicts/lists) to plain values."""
+ if isinstance(obj, dict):
+ return {k: model_dump(v) for k, v in obj.items()}
+ elif isinstance(obj, list):
+ return [model_dump(v) for v in obj]
+ elif hasattr(obj, "model_dump"):
+ return obj.model_dump()
+ else:
+ return obj
+
+def convert_glm_completion_to_responses_items(response: ModelResponse, image_width: int, image_height: int) -> List[Dict[str, Any]]:
+ """
+ Convert GLM-4.5V completion response to responses items format.
+
+ Args:
+ response: LiteLLM ModelResponse from GLM-4.5V
+ image_width: Original image width for coordinate scaling
+ image_height: Original image height for coordinate scaling
+
+ Returns:
+ List of response items in the proper format
+ """
+ import uuid
+
+ response_items = []
+
+ if not response.choices or not response.choices[0].message:
+ return response_items
+
+ message = response.choices[0].message
+ content = message.content or ""
+ reasoning_content = getattr(message, 'reasoning_content', None)
+
+ # Add reasoning item if present
+ if reasoning_content:
+ reasoning_item = model_dump(make_reasoning_item(reasoning_content))
+ response_items.append(reasoning_item)
+
+ # Parse the content to extract action and text
+ parsed_response = parse_glm_response(content)
+ action = parsed_response.get("action", "")
+ action_text = parsed_response.get("action_text", "")
+
+ # Add message item with text content (excluding action and memory)
+ if action_text:
+ # Remove action from action_text if it's there
+ clean_text = action_text
+ if action and action in clean_text:
+ clean_text = clean_text.replace(action, "").strip()
+
+ # Remove memory section
+ memory_pattern = r"Memory:\s*\[.*?\]\s*$"
+ clean_text = re.sub(memory_pattern, "", clean_text, flags=re.DOTALL).strip()
+
+ if clean_text:
+ message_item = model_dump(make_output_text_item(clean_text))
+ response_items.append(message_item)
+
+ # Convert action to computer call if present
+ if action:
+ call_id = f"call_{uuid.uuid4().hex[:8]}"
+
+ # Parse different action types and create appropriate computer calls
+ if action.startswith("left_click"):
+ coord_match = re.search(r"start_box='?\[(\d+),\s*(\d+)\]'?", action)
+ if coord_match:
+ x, y = int(coord_match.group(1)), int(coord_match.group(2))
+ # Convert from 0-999 to actual pixel coordinates
+ actual_x = int((x / 999.0) * image_width)
+ actual_y = int((y / 999.0) * image_height)
+ computer_call = model_dump(make_click_item(actual_x, actual_y))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action.startswith("right_click"):
+ coord_match = re.search(r"start_box='?\[(\d+),\s*(\d+)\]'?", action)
+ if coord_match:
+ x, y = int(coord_match.group(1)), int(coord_match.group(2))
+ actual_x = int((x / 999.0) * image_width)
+ actual_y = int((y / 999.0) * image_height)
+ computer_call = model_dump(make_click_item(actual_x, actual_y, button="right"))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action.startswith("left_double_click"):
+ coord_match = re.search(r"start_box='?\[(\d+),\s*(\d+)\]'?", action)
+ if coord_match:
+ x, y = int(coord_match.group(1)), int(coord_match.group(2))
+ actual_x = int((x / 999.0) * image_width)
+ actual_y = int((y / 999.0) * image_height)
+ computer_call = model_dump(make_double_click_item(actual_x, actual_y))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action.startswith("left_drag"):
+ start_match = re.search(r"start_box='?\[(\d+),\s*(\d+)\]'?", action)
+ end_match = re.search(r"end_box='?\[(\d+),\s*(\d+)\]'?", action)
+ if start_match and end_match:
+ x1, y1 = int(start_match.group(1)), int(start_match.group(2))
+ x2, y2 = int(end_match.group(1)), int(end_match.group(2))
+ actual_x1 = int((x1 / 999.0) * image_width)
+ actual_y1 = int((y1 / 999.0) * image_height)
+ actual_x2 = int((x2 / 999.0) * image_width)
+ actual_y2 = int((y2 / 999.0) * image_height)
+ # Create path for drag operation
+ drag_path = [{"x": actual_x1, "y": actual_y1}, {"x": actual_x2, "y": actual_y2}]
+ computer_call = model_dump(make_drag_item(drag_path))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action.startswith("key"):
+ key_match = re.search(r"keys='([^']+)'", action)
+ if key_match:
+ keys = key_match.group(1)
+ # split('+') handles key combinations and yields a one-element list for single keys
+ key_list = keys.split('+')
+ computer_call = model_dump(make_keypress_item(key_list))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action.startswith("type"):
+ content_match = re.search(r"content='([^']*)'", action)
+ if content_match:
+ content = content_match.group(1)
+ computer_call = model_dump(make_type_item(content))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action.startswith("scroll"):
+ coord_match = re.search(r"start_box='?\[(\d+),\s*(\d+)\]'?", action)
+ direction_match = re.search(r"direction='([^']+)'", action)
+ if coord_match and direction_match:
+ x, y = int(coord_match.group(1)), int(coord_match.group(2))
+ direction = direction_match.group(1)
+ actual_x = int((x / 999.0) * image_width)
+ actual_y = int((y / 999.0) * image_height)
+ # Convert direction to scroll amounts
+ scroll_x, scroll_y = 0, 0
+ if direction == "up":
+ scroll_y = -5
+ elif direction == "down":
+ scroll_y = 5
+ elif direction == "left":
+ scroll_x = -5
+ elif direction == "right":
+ scroll_x = 5
+ computer_call = model_dump(make_scroll_item(actual_x, actual_y, scroll_x, scroll_y))
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ elif action == "WAIT()":
+ computer_call = model_dump(make_wait_item())
+ computer_call["call_id"] = call_id
+ computer_call["status"] = "completed"
+ response_items.append(computer_call)
+
+ return response_items
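The 0-999 pixel scaling applied in each branch above can be isolated into a small standalone sketch (`denormalize` is an illustrative helper, not part of this diff):

```python
def denormalize(x: int, y: int, width: int, height: int) -> tuple:
    """Map GLM-4.5V's 0-999 normalized coordinates onto pixel coordinates."""
    return int((x / 999.0) * width), int((y / 999.0) * height)

# A click at the normalized center of a 1920x1080 screenshot:
print(denormalize(500, 500, 1920, 1080))  # (960, 540)
```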
+
+@register_agent(models=r"(?i).*GLM-4\.5V.*")
+class Glm4vConfig(AsyncAgentConfig):
+ """GLM-4.5V agent configuration using liteLLM."""
+
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ use_prompt_caching: Optional[bool] = False,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
+ **kwargs
+ ) -> Dict[str, Any]:
+ """
+ Predict the next step using GLM-4.5V model.
+
+ Args:
+ messages: Input messages following Responses format
+ model: Model name to use
+ tools: Optional list of tool schemas
+ max_retries: Maximum number of retries for API calls
+ stream: Whether to stream the response
+ computer_handler: Computer handler for taking screenshots
+ use_prompt_caching: Whether to use prompt caching
+ _on_api_start: Callback for API start
+ _on_api_end: Callback for API end
+ _on_usage: Callback for usage tracking
+ _on_screenshot: Callback for screenshot events
+
+ Returns:
+ Dict with "output" and "usage" keys
+ """
+ # Get the user instruction from the last user message
+ user_instruction = ""
+ for message in reversed(messages):
+ if isinstance(message, dict) and message.get("role") == "user":
+ content = message.get("content", "")
+ if isinstance(content, str):
+ user_instruction = content
+ elif isinstance(content, list):
+ for item in content:
+ if isinstance(item, dict) and item.get("type") == "text":
+ user_instruction = item.get("text", "")
+ break
+ break
+
+ # Get the last image for processing
+ last_image_b64 = get_last_image_from_messages(messages)
+ if not last_image_b64 and computer_handler:
+ # Take a screenshot if no image available
+ screenshot_b64 = await computer_handler.screenshot()
+ if screenshot_b64:
+ last_image_b64 = screenshot_b64
+ if _on_screenshot:
+ await _on_screenshot(screenshot_b64)
+
+ if not last_image_b64:
+ raise ValueError("No image available for GLM-4.5V processing")
+
+ # Convert responses items to GLM-4.5V PC prompt format with historical actions
+ prompt_content = convert_responses_items_to_glm45v_pc_prompt(
+ messages=messages,
+ task=user_instruction,
+ memory="[]" # Initialize with empty memory for now
+ )
+
+ # Add the current screenshot to the end
+ prompt_content.append({
+ "type": "image_url",
+ "image_url": {"url": f"data:image/png;base64,{last_image_b64}"}
+ })
+
+ # Prepare messages for liteLLM
+ litellm_messages = [
+ {
+ "role": "system",
+ "content": "You are a helpful GUI agent assistant."
+ },
+ {
+ "role": "user",
+ "content": prompt_content
+ }
+ ]
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": litellm_messages,
+ # "max_tokens": 2048,
+ # "temperature": 0.001,
+ # "extra_body": {
+ # "skip_special_tokens": False,
+ # }
+ }
+
+ # Add API callbacks
+ if _on_api_start:
+ await _on_api_start(api_kwargs)
+
+ # Call liteLLM
+ response = await litellm.acompletion(**api_kwargs)
+
+ if _on_api_end:
+ await _on_api_end(api_kwargs, response)
+
+ # Get image dimensions for coordinate scaling
+ image_width, image_height = 1920, 1080 # Default dimensions
+
+ # Try to get actual dimensions from the image
+ try:
+ image_data = base64.b64decode(last_image_b64)
+ image = Image.open(BytesIO(image_data))
+ image_width, image_height = image.size
+ except Exception:
+ pass # Use default dimensions
+
+ # Convert GLM completion response to responses items
+ response_items = convert_glm_completion_to_responses_items(response, image_width, image_height)
+
+ # Extract usage information
+ response_usage = {
+ **LiteLLMCompletionResponsesConfig._transform_chat_completion_usage_to_responses_usage(response.usage).model_dump(),
+ "response_cost": response._hidden_params.get("response_cost", 0.0),
+ }
+ if _on_usage:
+ await _on_usage(response_usage)
+
+ # Create agent response
+ agent_response = {
+ "output": response_items,
+ "usage": response_usage
+ }
+
+ return agent_response
+
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str,
+ **kwargs
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates using GLM-4.5V model.
+
+ Args:
+ model: Model name to use
+ image_b64: Base64 encoded image
+ instruction: Instruction for where to click
+
+ Returns:
+ Tuple with (x, y) coordinates or None
+ """
+ try:
+ # Create a simple click instruction prompt
+ click_prompt = f"""You are a GUI agent. Look at the screenshot and identify where to click for: {instruction}
+
+Respond with a single click action in this format:
+left_click(start_box='[x,y]')
+
+Where x,y are coordinates normalized to 0-999 range."""
+
+ # Prepare messages for liteLLM
+ litellm_messages = [
+ {
+ "role": "system",
+ "content": "You are a helpful GUI agent assistant."
+ },
+ {
+ "role": "user",
+ "content": [
+ {"type": "text", "text": click_prompt},
+ {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}}
+ ]
+ }
+ ]
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": litellm_messages,
+ "max_tokens": 100,
+ "temperature": 0.001,
+ "extra_body": {
+ "skip_special_tokens": False,
+ }
+ }
+
+ # Call liteLLM
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Extract response content
+ response_content = response.choices[0].message.content.strip()
+
+ # Parse response for click coordinates
+ # Look for coordinates in the response, handling special tokens
+ coord_pattern = r"<\|begin_of_box\|>.*?left_click\(start_box='?\[(\d+),\s*(\d+)\]'?\).*?<\|end_of_box\|>"
+ match = re.search(coord_pattern, response_content)
+
+ if not match:
+ # Fallback: look for coordinates without special tokens
+ coord_pattern = r"left_click\(start_box='?\[(\d+),\s*(\d+)\]'?\)"
+ match = re.search(coord_pattern, response_content)
+
+ if match:
+ x, y = int(match.group(1)), int(match.group(2))
+
+ # Get actual image dimensions for scaling
+ try:
+ image_data = base64.b64decode(image_b64)
+ image = Image.open(BytesIO(image_data))
+ image_width, image_height = image.size
+ except Exception:
+ # Use default dimensions
+ image_width, image_height = 1920, 1080
+
+ # Convert from 0-999 normalized coordinates to actual pixel coordinates
+ actual_x = int((x / 999.0) * image_width)
+ actual_y = int((y / 999.0) * image_height)
+
+ return (actual_x, actual_y)
+
+ return None
+
+ except Exception as e:
+ # Log error and return None
+ print(f"Error in predict_click: {e}")
+ return None
+
+ def get_capabilities(self) -> List[AgentCapability]:
+ """
+ Get list of capabilities supported by this agent config.
+
+ Returns:
+ List of capability strings
+ """
+ return ["step", "click"]
diff --git a/libs/python/agent/agent/loops/gta1.py b/libs/python/agent/agent/loops/gta1.py
new file mode 100644
index 00000000..13678b48
--- /dev/null
+++ b/libs/python/agent/agent/loops/gta1.py
@@ -0,0 +1,178 @@
+"""
+GTA1 agent loop implementation for click prediction using litellm.acompletion
+Paper: https://arxiv.org/pdf/2507.05791
+Code: https://github.com/Yan98/GTA1
+"""
+
+import asyncio
+import json
+import re
+import base64
+from typing import Dict, List, Any, AsyncGenerator, Union, Optional, Tuple
+from io import BytesIO
+import uuid
+from PIL import Image
+import litellm
+import math
+
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
+from ..loops.base import AsyncAgentConfig
+
+SYSTEM_PROMPT = '''
+You are an expert UI element locator. Given a GUI image and a user's element description, provide the coordinates of the specified element as a single (x,y) point. The image resolution is height {height} and width {width}. For elements with area, return the center point.
+
+Output the coordinate pair exactly:
+(x,y)
+'''.strip()
+
+def extract_coordinates(raw_string: str) -> Tuple[float, float]:
+ """Extract coordinates from model output."""
+ try:
+ matches = re.findall(r"\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\)", raw_string)
+ return tuple(map(float, matches[0])) # type: ignore
+ except (IndexError, ValueError):
+ return (0.0, 0.0)
+
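Concretely, the parser above grabs the first parenthesized number pair from the model's free-form reply and falls back to the origin when nothing matches (reproduced here so the behavior can be checked in isolation):

```python
import re
from typing import Tuple

def extract_coordinates(raw_string: str) -> Tuple[float, float]:
    """Pull the first (x,y) pair out of the model's reply; (0,0) if absent."""
    try:
        matches = re.findall(r"\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\)", raw_string)
        return tuple(map(float, matches[0]))
    except (IndexError, ValueError):
        return (0.0, 0.0)

print(extract_coordinates("The element is at (123.5, 456)."))  # (123.5, 456.0)
print(extract_coordinates("no coordinates here"))              # (0.0, 0.0)
```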
+def smart_resize(height: int, width: int, factor: int = 28, min_pixels: int = 3136, max_pixels: int = 8847360) -> Tuple[int, int]:
+ """Smart resize function similar to qwen_vl_utils."""
+ # Calculate the total pixels
+ total_pixels = height * width
+
+ # If already within bounds, return original dimensions
+ if min_pixels <= total_pixels <= max_pixels:
+ # Round to nearest factor
+ new_height = (height // factor) * factor
+ new_width = (width // factor) * factor
+ return new_height, new_width
+
+ # Calculate scaling factor
+ if total_pixels > max_pixels:
+ scale = (max_pixels / total_pixels) ** 0.5
+ else:
+ scale = (min_pixels / total_pixels) ** 0.5
+
+ # Apply scaling
+ new_height = int(height * scale)
+ new_width = int(width * scale)
+
+ # Round to nearest factor
+ new_height = (new_height // factor) * factor
+ new_width = (new_width // factor) * factor
+
+ # Ensure minimum size
+ new_height = max(new_height, factor)
+ new_width = max(new_width, factor)
+
+ return new_height, new_width
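With these defaults a standard 1080p screenshot is already inside the pixel budget, so each side is only snapped down to the nearest multiple of the patch factor:

```python
# smart_resize(1080, 1920) with factor=28:
# 1080 * 1920 = 2,073,600 pixels, within [3136, 8,847,360],
# so each dimension is floored to a multiple of 28.
height, width = (1080 // 28) * 28, (1920 // 28) * 28
print(height, width)  # 1064 1904
```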
+
+@register_agent(models=r".*GTA1.*")
+class GTA1Config(AsyncAgentConfig):
+ """GTA1 agent configuration implementing AsyncAgentConfig protocol for click prediction."""
+
+ def __init__(self):
+ self.current_model = None
+ self.last_screenshot_b64 = None
+
+
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
+ **kwargs
+ ) -> Dict[str, Any]:
+ raise NotImplementedError()
+
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str,
+ **kwargs
+ ) -> Optional[Tuple[float, float]]:
+ """
+ Predict click coordinates using GTA1 model via litellm.acompletion.
+
+ Args:
+ model: The GTA1 model name
+ image_b64: Base64 encoded image
+ instruction: Instruction for where to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ # Decode base64 image
+ image_data = base64.b64decode(image_b64)
+ image = Image.open(BytesIO(image_data))
+ width, height = image.width, image.height
+
+ # Smart resize the image (similar to qwen_vl_utils)
+ resized_height, resized_width = smart_resize(
+ height, width,
+ factor=28, # Default factor for Qwen models
+ min_pixels=3136,
+ max_pixels=4096 * 2160
+ )
+ resized_image = image.resize((resized_width, resized_height))
+ scale_x, scale_y = width / resized_width, height / resized_height
+
+ # Convert resized image back to base64
+ buffered = BytesIO()
+ resized_image.save(buffered, format="PNG")
+ resized_image_b64 = base64.b64encode(buffered.getvalue()).decode()
+
+ # Prepare system and user messages
+ system_message = {
+ "role": "system",
+ "content": SYSTEM_PROMPT.format(height=resized_height, width=resized_width)
+ }
+
+ user_message = {
+ "role": "user",
+ "content": [
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": f"data:image/png;base64,{resized_image_b64}"
+ }
+ },
+ {
+ "type": "text",
+ "text": instruction
+ }
+ ]
+ }
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": [system_message, user_message],
+ "max_tokens": 32,
+ "temperature": 0.0,
+ **kwargs
+ }
+
+ # Use liteLLM acompletion
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Extract response text
+ output_text = response.choices[0].message.content # type: ignore
+
+ # Extract and rescale coordinates
+ pred_x, pred_y = extract_coordinates(output_text) # type: ignore
+ pred_x *= scale_x
+ pred_y *= scale_y
+
+ return (math.floor(pred_x), math.floor(pred_y))
+
+ def get_capabilities(self) -> List[AgentCapability]:
+ """Return the capabilities supported by this agent."""
+ return ["click"]
diff --git a/libs/python/agent/agent/loops/model_types.csv b/libs/python/agent/agent/loops/model_types.csv
new file mode 100644
index 00000000..e43d4fb1
--- /dev/null
+++ b/libs/python/agent/agent/loops/model_types.csv
@@ -0,0 +1,6 @@
+model,predict_step,predict_click
+anthropic,✅,✅
+openai,✅,✅
+uitars,✅,✅
+omniparser,✅,✅
+gta1,❌,✅
\ No newline at end of file
diff --git a/libs/python/agent/agent/loops/omniparser.py b/libs/python/agent/agent/loops/omniparser.py
index f0e7832a..d85d07de 100644
--- a/libs/python/agent/agent/loops/omniparser.py
+++ b/libs/python/agent/agent/loops/omniparser.py
@@ -1,5 +1,7 @@
"""
-OpenAI computer-use-preview agent loop implementation using liteLLM
+OmniParser agent loop implementation using liteLLM
+Paper: https://arxiv.org/abs/2408.00203
+Code: https://github.com/microsoft/OmniParser
"""
import asyncio
@@ -9,8 +11,9 @@ import litellm
import inspect
import base64
-from ..decorators import agent_loop
-from ..types import Messages, AgentResponse, Tools
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
+from ..loops.base import AsyncAgentConfig
SOM_TOOL_SCHEMA = {
"type": "function",
@@ -246,94 +249,185 @@ async def replace_computer_call_with_function(item: Dict[str, Any], xy2id: Dict[
return [item]
-@agent_loop(models=r"omniparser\+.*|omni\+.*", priority=10)
-async def omniparser_loop(
- messages: Messages,
- model: str,
- tools: Optional[List[Dict[str, Any]]] = None,
- max_retries: Optional[int] = None,
- stream: bool = False,
- computer_handler=None,
- use_prompt_caching: Optional[bool] = False,
- _on_api_start=None,
- _on_api_end=None,
- _on_usage=None,
- _on_screenshot=None,
- **kwargs
-) -> Union[AgentResponse, AsyncGenerator[Dict[str, Any], None]]:
- """
- OpenAI computer-use-preview agent loop using liteLLM responses.
+@register_agent(models=r"omniparser\+.*|omni\+.*", priority=2)
+class OmniparserConfig(AsyncAgentConfig):
+ """Omniparser agent configuration implementing AsyncAgentConfig protocol."""
- Supports OpenAI's computer use preview models.
- """
- if not OMNIPARSER_AVAILABLE:
- raise ValueError("omniparser loop requires som to be installed. Install it with `pip install cua-som`.")
-
- tools = tools or []
-
- llm_model = model.split('+')[-1]
-
- # Prepare tools for OpenAI API
- openai_tools, id2xy = _prepare_tools_for_omniparser(tools)
-
- # Find last computer_call_output
- last_computer_call_output = get_last_computer_call_output(messages)
- if last_computer_call_output:
- image_url = last_computer_call_output.get("output", {}).get("image_url", "")
- image_data = image_url.split(",")[-1]
- if image_data:
- parser = get_parser()
- result = parser.parse(image_data)
- if _on_screenshot:
- await _on_screenshot(result.annotated_image_base64, "annotated_image")
- for element in result.elements:
- id2xy[element.id] = ((element.bbox.x1 + element.bbox.x2) / 2, (element.bbox.y1 + element.bbox.y2) / 2)
-
- # handle computer calls -> function calls
- new_messages = []
- for message in messages:
- if not isinstance(message, dict):
- message = message.__dict__
- new_messages += await replace_computer_call_with_function(message, id2xy)
- messages = new_messages
-
- # Prepare API call kwargs
- api_kwargs = {
- "model": llm_model,
- "input": messages,
- "tools": openai_tools if openai_tools else None,
- "stream": stream,
- "reasoning": {"summary": "concise"},
- "truncation": "auto",
- "num_retries": max_retries,
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ use_prompt_caching: Optional[bool] = False,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
**kwargs
- }
+ ) -> Dict[str, Any]:
+ """
+ Omniparser agent loop using liteLLM responses.
+
+ Supports composed models of the form `omniparser+{any LLM}`.
+ """
+ if not OMNIPARSER_AVAILABLE:
+ raise ValueError("omniparser loop requires som to be installed. Install it with `pip install cua-som`.")
+
+ tools = tools or []
+
+ llm_model = model.split('+')[-1]
+
+ # Prepare tools for OpenAI API
+ openai_tools, id2xy = _prepare_tools_for_omniparser(tools)
+
+ # Find last computer_call_output
+ last_computer_call_output = get_last_computer_call_output(messages) # type: ignore
+ if last_computer_call_output:
+ image_url = last_computer_call_output.get("output", {}).get("image_url", "")
+ image_data = image_url.split(",")[-1]
+ if image_data:
+ parser = get_parser()
+ result = parser.parse(image_data)
+ if _on_screenshot:
+ await _on_screenshot(result.annotated_image_base64, "annotated_image")
+ for element in result.elements:
+ id2xy[element.id] = ((element.bbox.x1 + element.bbox.x2) / 2, (element.bbox.y1 + element.bbox.y2) / 2)
+
+ # handle computer calls -> function calls
+ new_messages = []
+ for message in messages:
+ if not isinstance(message, dict):
+ message = message.__dict__
+ new_messages += await replace_computer_call_with_function(message, id2xy) # type: ignore
+ messages = new_messages
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": llm_model,
+ "input": messages,
+ "tools": openai_tools if openai_tools else None,
+ "stream": stream,
+ "truncation": "auto",
+ "num_retries": max_retries,
+ **kwargs
+ }
+
+ # Call API start hook
+ if _on_api_start:
+ await _on_api_start(api_kwargs)
+
+
+ # Use liteLLM responses
+ response = await litellm.aresponses(**api_kwargs)
+
+ # Call API end hook
+ if _on_api_end:
+ await _on_api_end(api_kwargs, response)
+
+ # Extract usage information
+ usage = {
+ **response.usage.model_dump(), # type: ignore
+ "response_cost": response._hidden_params.get("response_cost", 0.0), # type: ignore
+ }
+ if _on_usage:
+ await _on_usage(usage)
+
+ # handle som function calls -> xy computer calls
+ new_output = []
+ for i in range(len(response.output)): # type: ignore
+ new_output += await replace_function_with_computer_call(response.output[i].model_dump(), id2xy) # type: ignore
+
+ return {
+ "output": new_output,
+ "usage": usage
+ }
- # Call API start hook
- if _on_api_start:
- await _on_api_start(api_kwargs)
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str,
+ **kwargs
+ ) -> Optional[Tuple[float, float]]:
+ """
+ Predict click coordinates using OmniParser and LLM.
+
+ Uses OmniParser to annotate the image with element IDs, then uses an LLM
+ to identify the correct element ID based on the instruction.
+ """
+ if not OMNIPARSER_AVAILABLE:
+ return None
+
+ # Parse the image with OmniParser to get annotated image and elements
+ parser = get_parser()
+ result = parser.parse(image_b64)
+
+ # Extract the LLM model from composed model string
+ llm_model = model.split('+')[-1]
+
+ # Create system prompt for element ID prediction
+ SYSTEM_PROMPT = f'''
+You are an expert UI element locator. Given a GUI image annotated with numerical IDs over each interactable element, along with a user's element description, provide the ID of the specified element.
+
+The image shows UI elements with numbered overlays. Each number corresponds to a clickable/interactable element.
+
+Output only the element ID as a single integer.
+'''.strip()
+
+ # Prepare messages for LLM
+ messages = [
+ {
+ "role": "system",
+ "content": SYSTEM_PROMPT
+ },
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": f"data:image/png;base64,{result.annotated_image_base64}"
+ }
+ },
+ {
+ "type": "text",
+ "text": f"Find the element: {instruction}"
+ }
+ ]
+ }
+ ]
+
+ # Call LLM to predict element ID
+ response = await litellm.acompletion(
+ model=llm_model,
+ messages=messages,
+ max_tokens=10,
+ temperature=0.1
+ )
+
+ # Extract element ID from response
+ response_text = response.choices[0].message.content.strip() # type: ignore
+
+ # Try to parse the element ID
+ try:
+ element_id = int(response_text)
+
+ # Find the element with this ID and return its center coordinates
+ for element in result.elements:
+ if element.id == element_id:
+ center_x = (element.bbox.x1 + element.bbox.x2) / 2
+ center_y = (element.bbox.y1 + element.bbox.y2) / 2
+ return (center_x, center_y)
+ except ValueError:
+ # If we can't parse the ID, return None
+ pass
+
+ return None
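The center-point computation used for a matched element is just the bounding-box midpoint; as a standalone sketch (`Bbox` here is a stand-in for the cua-som element bbox type, not the real class):

```python
from dataclasses import dataclass

@dataclass
class Bbox:  # stand-in for the cua-som bounding-box type
    x1: float
    y1: float
    x2: float
    y2: float

def center(b: Bbox) -> tuple:
    """Return the midpoint of a bounding box, as done for matched element IDs."""
    return ((b.x1 + b.x2) / 2, (b.y1 + b.y2) / 2)

print(center(Bbox(10, 20, 110, 60)))  # (60.0, 40.0)
```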
- print(str(api_kwargs)[:1000])
-
- # Use liteLLM responses
- response = await litellm.aresponses(**api_kwargs)
-
- # Call API end hook
- if _on_api_end:
- await _on_api_end(api_kwargs, response)
-
- # Extract usage information
- response.usage = {
- **response.usage.model_dump(),
- "response_cost": response._hidden_params.get("response_cost", 0.0),
- }
- if _on_usage:
- await _on_usage(response.usage)
-
- # handle som function calls -> xy computer calls
- new_output = []
- for i in range(len(response.output)):
- new_output += await replace_function_with_computer_call(response.output[i].model_dump(), id2xy)
- response.output = new_output
-
- return response
+ def get_capabilities(self) -> List[AgentCapability]:
+ """Return the capabilities supported by this agent."""
+ return ["step", "click"]
diff --git a/libs/python/agent/agent/loops/openai.py b/libs/python/agent/agent/loops/openai.py
index 84b79d1f..bb6a13a6 100644
--- a/libs/python/agent/agent/loops/openai.py
+++ b/libs/python/agent/agent/loops/openai.py
@@ -3,31 +3,49 @@ OpenAI computer-use-preview agent loop implementation using liteLLM
"""
import asyncio
+import base64
import json
-from typing import Dict, List, Any, AsyncGenerator, Union, Optional
+from io import BytesIO
+from typing import Dict, List, Any, AsyncGenerator, Union, Optional, Tuple
import litellm
+from PIL import Image
-from ..decorators import agent_loop
-from ..types import Messages, AgentResponse, Tools
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
-def _map_computer_tool_to_openai(computer_tool: Any) -> Dict[str, Any]:
+async def _map_computer_tool_to_openai(computer_handler: Any) -> Dict[str, Any]:
"""Map a computer tool to OpenAI's computer-use-preview tool schema"""
+ # Get dimensions from the computer handler
+ try:
+ width, height = await computer_handler.get_dimensions()
+ except Exception:
+ # Fallback to default dimensions if method fails
+ width, height = 1024, 768
+
+ # Get environment from the computer handler
+ try:
+ environment = await computer_handler.get_environment()
+ except Exception:
+ # Fallback to default environment if method fails
+ environment = "linux"
+
return {
"type": "computer_use_preview",
- "display_width": getattr(computer_tool, 'display_width', 1024),
- "display_height": getattr(computer_tool, 'display_height', 768),
- "environment": getattr(computer_tool, 'environment', "linux") # mac, windows, linux, browser
+ "display_width": width,
+ "display_height": height,
+ "environment": environment # mac, windows, linux, browser
}
-def _prepare_tools_for_openai(tool_schemas: List[Dict[str, Any]]) -> Tools:
+async def _prepare_tools_for_openai(tool_schemas: List[Dict[str, Any]]) -> Tools:
"""Prepare tools for OpenAI API format"""
openai_tools = []
for schema in tool_schemas:
if schema["type"] == "computer":
# Map computer tool to OpenAI format
- openai_tools.append(_map_computer_tool_to_openai(schema["computer"]))
+ computer_tool = await _map_computer_tool_to_openai(schema["computer"])
+ openai_tools.append(computer_tool)
elif schema["type"] == "function":
# Function tools use OpenAI-compatible schema directly (liteLLM expects this format)
# Schema should be: {type, name, description, parameters}
@@ -36,60 +54,182 @@ def _prepare_tools_for_openai(tool_schemas: List[Dict[str, Any]]) -> Tools:
return openai_tools
-@agent_loop(models=r".*computer-use-preview.*", priority=10)
-async def openai_computer_use_loop(
- messages: Messages,
- model: str,
- tools: Optional[List[Dict[str, Any]]] = None,
- max_retries: Optional[int] = None,
- stream: bool = False,
- computer_handler=None,
- use_prompt_caching: Optional[bool] = False,
- _on_api_start=None,
- _on_api_end=None,
- _on_usage=None,
- _on_screenshot=None,
- **kwargs
-) -> Union[AgentResponse, AsyncGenerator[Dict[str, Any], None]]:
+@register_agent(models=r".*computer-use-preview.*")
+class OpenAIComputerUseConfig:
"""
- OpenAI computer-use-preview agent loop using liteLLM responses.
+ OpenAI computer-use-preview agent configuration using liteLLM responses.
Supports OpenAI's computer use preview models.
"""
- tools = tools or []
- # Prepare tools for OpenAI API
- openai_tools = _prepare_tools_for_openai(tools)
-
- # Prepare API call kwargs
- api_kwargs = {
- "model": model,
- "input": messages,
- "tools": openai_tools if openai_tools else None,
- "stream": stream,
- "reasoning": {"summary": "concise"},
- "truncation": "auto",
- "num_retries": max_retries,
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ use_prompt_caching: Optional[bool] = False,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
**kwargs
- }
-
- # Call API start hook
- if _on_api_start:
- await _on_api_start(api_kwargs)
-
- # Use liteLLM responses
- response = await litellm.aresponses(**api_kwargs)
-
- # Call API end hook
- if _on_api_end:
- await _on_api_end(api_kwargs, response)
+ ) -> Dict[str, Any]:
+ """
+ Predict the next step based on input items.
+
+ Args:
+ messages: Input items following Responses format
+ model: Model name to use
+ tools: Optional list of tool schemas
+ max_retries: Maximum number of retries
+ stream: Whether to stream responses
+ computer_handler: Computer handler instance
+ _on_api_start: Callback for API start
+ _on_api_end: Callback for API end
+ _on_usage: Callback for usage tracking
+ _on_screenshot: Callback for screenshot events
+ **kwargs: Additional arguments
+
+ Returns:
+ Dictionary with "output" (output items) and "usage" information
+ """
+ tools = tools or []
+
+ # Prepare tools for OpenAI API
+ openai_tools = await _prepare_tools_for_openai(tools)
- # Extract usage information
- response.usage = {
- **response.usage.model_dump(),
- "response_cost": response._hidden_params.get("response_cost", 0.0),
- }
- if _on_usage:
- await _on_usage(response.usage)
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "input": messages,
+ "tools": openai_tools if openai_tools else None,
+ "stream": stream,
+ "reasoning": {"summary": "concise"},
+ "truncation": "auto",
+ "num_retries": max_retries,
+ **kwargs
+ }
+
+ # Call API start hook
+ if _on_api_start:
+ await _on_api_start(api_kwargs)
+
+ # Use liteLLM responses
+ response = await litellm.aresponses(**api_kwargs)
+
+ # Call API end hook
+ if _on_api_end:
+ await _on_api_end(api_kwargs, response)
+
+ # Extract usage information
+ usage = {
+ **response.usage.model_dump(),
+ "response_cost": response._hidden_params.get("response_cost", 0.0),
+ }
+ if _on_usage:
+ await _on_usage(usage)
+
+ # Return in the expected format
+ output_dict = response.model_dump()
+ output_dict["usage"] = usage
+ return output_dict
- return response
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates based on image and instruction.
+
+ Uses OpenAI computer-use-preview with manually constructed input items
+ and a prompt that instructs the agent to only output clicks.
+
+ Args:
+ model: Model name to use
+ image_b64: Base64 encoded image
+ instruction: Instruction for where to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ # TODO: use computer tool to get dimensions + environment
+ # Manually construct input items with image and click instruction
+ input_items = [
+ {
+ "role": "user",
+ "content": f"You are a UI grounding expert. Look at the image and {instruction}. Output ONLY a click action on the target element. No explanations, confirmations, or additional text."
+ },
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "input_image",
+ "image_url": f"data:image/png;base64,{image_b64}"
+ }
+ ]
+ }
+ ]
+
+ # Get image dimensions from base64 data
+ try:
+ image_data = base64.b64decode(image_b64)
+ image = Image.open(BytesIO(image_data))
+ display_width, display_height = image.size
+ except Exception:
+ # Fallback to default dimensions if image parsing fails
+ display_width, display_height = 1024, 768
+
+ # Prepare computer tool for click actions
+ computer_tool = {
+ "type": "computer_use_preview",
+ "display_width": display_width,
+ "display_height": display_height,
+ "environment": "windows"
+ }
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "input": input_items,
+ "tools": [computer_tool],
+ "stream": False,
+ "reasoning": {"summary": "concise"},
+ "truncation": "auto",
+ "max_tokens": 100 # Keep response short for click prediction
+ }
+
+ # Use liteLLM responses
+ response = await litellm.aresponses(**api_kwargs)
+
+ # Extract click coordinates from response output
+ output_dict = response.model_dump()
+ output_items = output_dict.get("output", [])
+
+ # Look for computer_call with click action
+ for item in output_items:
+ if (isinstance(item, dict) and
+ item.get("type") == "computer_call" and
+ isinstance(item.get("action"), dict)):
+
+ action = item["action"]
+ if action.get("type") == "click":
+ x = action.get("x")
+ y = action.get("y")
+ if x is not None and y is not None:
+ return (int(x), int(y))
+
+ return None
+
+ def get_capabilities(self) -> List[AgentCapability]:
+ """
+ Get list of capabilities supported by this agent config.
+
+ Returns:
+ List of capability strings
+ """
+ return ["click", "step"]
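The click-extraction loop in `predict_click` above can be exercised on its own. The following standalone sketch (it does not import the agent package; the item shapes are assumed from the Responses format used here) mirrors that scan over output items:

```python
from typing import Any, Dict, List, Optional, Tuple

def extract_click(output_items: List[Dict[str, Any]]) -> Optional[Tuple[int, int]]:
    """Scan Responses-format output items for the first computer_call click action."""
    for item in output_items:
        if (isinstance(item, dict) and
                item.get("type") == "computer_call" and
                isinstance(item.get("action"), dict)):
            action = item["action"]
            if action.get("type") == "click":
                x, y = action.get("x"), action.get("y")
                if x is not None and y is not None:
                    # Coordinates may come back as floats; normalize to ints
                    return (int(x), int(y))
    return None

items = [
    {"type": "reasoning", "summary": []},
    {"type": "computer_call", "action": {"type": "click", "x": 412.0, "y": 88.0}},
]
print(extract_click(items))  # (412, 88)
```

Non-click actions and malformed items fall through to `None`, which is why the caller treats `None` as "prediction failed" rather than raising.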
diff --git a/libs/python/agent/agent/loops/uitars.py b/libs/python/agent/agent/loops/uitars.py
index e82e005d..2c1b41b9 100644
--- a/libs/python/agent/agent/loops/uitars.py
+++ b/libs/python/agent/agent/loops/uitars.py
@@ -1,5 +1,7 @@
"""
UITARS agent loop implementation using liteLLM for ByteDance-Seed/UI-TARS-1.5-7B
+Paper: https://arxiv.org/abs/2501.12326
+Code: https://github.com/bytedance/UI-TARS
"""
import asyncio
@@ -9,7 +11,7 @@ import base64
import math
import re
import ast
-from typing import Dict, List, Any, AsyncGenerator, Union, Optional
+from typing import Dict, List, Any, AsyncGenerator, Union, Optional, Tuple
from io import BytesIO
from PIL import Image
import litellm
@@ -21,8 +23,8 @@ from openai.types.responses.response_input_param import ComputerCallOutput
from openai.types.responses.response_output_message_param import ResponseOutputMessageParam
from openai.types.responses.response_reasoning_item_param import ResponseReasoningItemParam, Summary
-from ..decorators import agent_loop
-from ..types import Messages, AgentResponse, Tools
+from ..decorators import register_agent
+from ..types import Messages, AgentResponse, Tools, AgentCapability
from ..responses import (
make_reasoning_item,
make_output_text_item,
@@ -79,6 +81,18 @@ Action: ...
{instruction}
"""
+GROUNDING_UITARS_PROMPT_TEMPLATE = """You are a GUI agent. You are given a task and your action history, with screenshots. You need to perform the next action to complete the task.
+
+## Output Format
+
+Action: ...
+
+
+## Action Space
+click(point='<|box_start|>(x1,y1)<|box_end|>')
+
+## User Instruction
+{instruction}"""
def round_by_factor(number: float, factor: int) -> int:
"""Returns the closest integer to 'number' that is divisible by 'factor'."""
@@ -501,188 +515,301 @@ def convert_uitars_messages_to_litellm(messages: Messages) -> List[Dict[str, Any
return litellm_messages
-@agent_loop(models=r"(?i).*ui-?tars.*", priority=10)
-async def uitars_loop(
- messages: Messages,
- model: str,
- tools: Optional[List[Dict[str, Any]]] = None,
- max_retries: Optional[int] = None,
- stream: bool = False,
- computer_handler=None,
- use_prompt_caching: Optional[bool] = False,
- _on_api_start=None,
- _on_api_end=None,
- _on_usage=None,
- _on_screenshot=None,
- **kwargs
-) -> Union[AgentResponse, AsyncGenerator[Dict[str, Any], None]]:
+@register_agent(models=r"(?i).*ui-?tars.*")
+class UITARSConfig:
"""
- UITARS agent loop using liteLLM for ByteDance-Seed/UI-TARS-1.5-7B model.
+ UITARS agent configuration using liteLLM for ByteDance-Seed/UI-TARS-1.5-7B model.
Supports UITARS vision-language models for computer control.
"""
- tools = tools or []
- # Create response items
- response_items = []
-
- # Find computer tool for screen dimensions
- computer_tool = None
- for tool_schema in tools:
- if tool_schema["type"] == "computer":
- computer_tool = tool_schema["computer"]
- break
-
- # Get screen dimensions
- screen_width, screen_height = 1024, 768
- if computer_tool:
- try:
- screen_width, screen_height = await computer_tool.get_dimensions()
- except:
- pass
-
- # Process messages to extract instruction and image
- instruction = ""
- image_data = None
-
- # Convert messages to list if string
- if isinstance(messages, str):
- messages = [{"role": "user", "content": messages}]
-
- # Extract instruction and latest screenshot
- for message in reversed(messages):
- if isinstance(message, dict):
- content = message.get("content", "")
+ async def predict_step(
+ self,
+ messages: List[Dict[str, Any]],
+ model: str,
+ tools: Optional[List[Dict[str, Any]]] = None,
+ max_retries: Optional[int] = None,
+ stream: bool = False,
+ computer_handler=None,
+ use_prompt_caching: Optional[bool] = False,
+ _on_api_start=None,
+ _on_api_end=None,
+ _on_usage=None,
+ _on_screenshot=None,
+ **kwargs
+ ) -> Dict[str, Any]:
+ """
+ Predict the next step based on input messages.
+
+ Args:
+ messages: Input messages following Responses format
+ model: Model name to use
+ tools: Optional list of tool schemas
+ max_retries: Maximum number of retries
+ stream: Whether to stream responses
+ computer_handler: Computer handler instance
+ _on_api_start: Callback for API start
+ _on_api_end: Callback for API end
+ _on_usage: Callback for usage tracking
+ _on_screenshot: Callback for screenshot events
+ **kwargs: Additional arguments
- # Handle different content formats
- if isinstance(content, str):
- if not instruction and message.get("role") == "user":
- instruction = content
- elif isinstance(content, list):
- for item in content:
- if isinstance(item, dict):
- if item.get("type") == "text" and not instruction:
- instruction = item.get("text", "")
- elif item.get("type") == "image_url" and not image_data:
- image_url = item.get("image_url", {})
- if isinstance(image_url, dict):
- image_data = image_url.get("url", "")
- else:
- image_data = image_url
+ Returns:
+ Dictionary with "output" (output items) and "usage" information
+ """
+ tools = tools or []
- # Also check for computer_call_output with screenshots
- if message.get("type") == "computer_call_output" and not image_data:
- output = message.get("output", {})
- if isinstance(output, dict) and output.get("type") == "input_image":
- image_data = output.get("image_url", "")
+ # Create response items
+ response_items = []
- if instruction and image_data:
- break
-
- if not instruction:
- instruction = "Help me complete this task by analyzing the screen and taking appropriate actions."
-
- # Create prompt
- user_prompt = UITARS_PROMPT_TEMPLATE.format(
- instruction=instruction,
- action_space=UITARS_ACTION_SPACE,
- language="English"
- )
-
- # Convert conversation history to LiteLLM format
- history_messages = convert_uitars_messages_to_litellm(messages)
-
- # Prepare messages for liteLLM
- litellm_messages = [
- {
- "role": "system",
- "content": "You are a helpful assistant."
- }
- ]
-
- # Add current user instruction with screenshot
- current_user_message = {
- "role": "user",
- "content": [
- {"type": "text", "text": user_prompt},
+ # Find computer tool for screen dimensions
+ computer_tool = None
+ for tool_schema in tools:
+ if tool_schema["type"] == "computer":
+ computer_tool = tool_schema["computer"]
+ break
+
+ # Get screen dimensions
+ screen_width, screen_height = 1024, 768
+ if computer_tool:
+ try:
+ screen_width, screen_height = await computer_tool.get_dimensions()
+ except Exception:
+ pass
+
+ # Process messages to extract instruction and image
+ instruction = ""
+ image_data = None
+
+ # Convert messages to list if string
+ if isinstance(messages, str):
+ messages = [{"role": "user", "content": messages}]
+
+ # Extract instruction and latest screenshot
+ for message in reversed(messages):
+ if isinstance(message, dict):
+ content = message.get("content", "")
+
+ # Handle different content formats
+ if isinstance(content, str):
+ if not instruction and message.get("role") == "user":
+ instruction = content
+ elif isinstance(content, list):
+ for item in content:
+ if isinstance(item, dict):
+ if item.get("type") == "text" and not instruction:
+ instruction = item.get("text", "")
+ elif item.get("type") == "image_url" and not image_data:
+ image_url = item.get("image_url", {})
+ if isinstance(image_url, dict):
+ image_data = image_url.get("url", "")
+ else:
+ image_data = image_url
+
+ # Also check for computer_call_output with screenshots
+ if message.get("type") == "computer_call_output" and not image_data:
+ output = message.get("output", {})
+ if isinstance(output, dict) and output.get("type") == "input_image":
+ image_data = output.get("image_url", "")
+
+ if instruction and image_data:
+ break
+
+ if not instruction:
+ instruction = "Help me complete this task by analyzing the screen and taking appropriate actions."
+
+ # Create prompt
+ user_prompt = UITARS_PROMPT_TEMPLATE.format(
+ instruction=instruction,
+ action_space=UITARS_ACTION_SPACE,
+ language="English"
+ )
+
+ # Convert conversation history to LiteLLM format
+ history_messages = convert_uitars_messages_to_litellm(messages)
+
+ # Prepare messages for liteLLM
+ litellm_messages = [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant."
+ }
]
- }
- litellm_messages.append(current_user_message)
-
- # Process image for UITARS
- if not image_data:
- # Take screenshot if none found in messages
- if computer_handler:
- image_data = await computer_handler.screenshot()
- await _on_screenshot(image_data, "screenshot_before")
- # Add screenshot to output items so it can be retained in history
- response_items.append(make_input_image_item(image_data))
- else:
- raise ValueError("No screenshot found in messages and no computer_handler provided")
- processed_image, original_width, original_height = process_image_for_uitars(image_data)
- encoded_image = pil_to_base64(processed_image)
-
- # Add conversation history
- if history_messages:
- litellm_messages.extend(history_messages)
- else:
- litellm_messages.append({
- "role": "user",
+ # Add current user instruction with screenshot
+ current_user_message = {
+ "role": "user",
"content": [
- {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded_image}"}}
+ {"type": "text", "text": user_prompt},
]
- })
+ }
+ litellm_messages.append(current_user_message)
+
+ # Process image for UITARS
+ if not image_data:
+ # Take screenshot if none found in messages
+ if computer_handler:
+ image_data = await computer_handler.screenshot()
+ if _on_screenshot:
+ await _on_screenshot(image_data, "screenshot_before")
- # Prepare API call kwargs
- api_kwargs = {
- "model": model,
- "messages": litellm_messages,
- "max_tokens": kwargs.get("max_tokens", 500),
- "temperature": kwargs.get("temperature", 0.0),
- "do_sample": kwargs.get("temperature", 0.0) > 0.0,
- "num_retries": max_retries,
- **{k: v for k, v in kwargs.items() if k not in ["max_tokens", "temperature"]}
- }
-
- # Call API start hook
- if _on_api_start:
- await _on_api_start(api_kwargs)
-
- # Call liteLLM with UITARS model
- response = await litellm.acompletion(**api_kwargs)
-
- # Call API end hook
- if _on_api_end:
- await _on_api_end(api_kwargs, response)
-
- # Extract response content
- response_content = response.choices[0].message.content.strip() # type: ignore
-
- # Parse UITARS response
- parsed_responses = parse_uitars_response(response_content, original_width, original_height)
-
- # Convert to computer actions
- computer_actions = convert_to_computer_actions(parsed_responses, original_width, original_height)
-
- # Add computer actions to response items
- thought = parsed_responses[0].get("thought", "")
- if thought:
- response_items.append(make_reasoning_item(thought))
- response_items.extend(computer_actions)
-
- # Extract usage information
- response_usage = {
- **LiteLLMCompletionResponsesConfig._transform_chat_completion_usage_to_responses_usage(response.usage).model_dump(),
- "response_cost": response._hidden_params.get("response_cost", 0.0),
- }
- if _on_usage:
- await _on_usage(response_usage)
+ # Add screenshot to output items so it can be retained in history
+ response_items.append(make_input_image_item(image_data))
+ else:
+ raise ValueError("No screenshot found in messages and no computer_handler provided")
+ processed_image, original_width, original_height = process_image_for_uitars(image_data)
+ encoded_image = pil_to_base64(processed_image)
+
+ # Add conversation history
+ if history_messages:
+ litellm_messages.extend(history_messages)
+ else:
+ litellm_messages.append({
+ "role": "user",
+ "content": [
+ {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded_image}"}}
+ ]
+ })
- # Create agent response
- agent_response = {
- "output": response_items,
- "usage": response_usage
- }
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": litellm_messages,
+ "max_tokens": kwargs.get("max_tokens", 500),
+ "temperature": kwargs.get("temperature", 0.0),
+ "do_sample": kwargs.get("temperature", 0.0) > 0.0,
+ "num_retries": max_retries,
+ **{k: v for k, v in kwargs.items() if k not in ["max_tokens", "temperature"]}
+ }
+
+ # Call API start hook
+ if _on_api_start:
+ await _on_api_start(api_kwargs)
+
+ # Call liteLLM with UITARS model
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Call API end hook
+ if _on_api_end:
+ await _on_api_end(api_kwargs, response)
+
+ # Extract response content
+ response_content = response.choices[0].message.content.strip() # type: ignore
+
+ # Parse UITARS response
+ parsed_responses = parse_uitars_response(response_content, original_width, original_height)
+
+ # Convert to computer actions
+ computer_actions = convert_to_computer_actions(parsed_responses, original_width, original_height)
+
+ # Add computer actions to response items
+ thought = parsed_responses[0].get("thought", "")
+ if thought:
+ response_items.append(make_reasoning_item(thought))
+ response_items.extend(computer_actions)
+
+ # Extract usage information
+ response_usage = {
+ **LiteLLMCompletionResponsesConfig._transform_chat_completion_usage_to_responses_usage(response.usage).model_dump(),
+ "response_cost": response._hidden_params.get("response_cost", 0.0),
+ }
+ if _on_usage:
+ await _on_usage(response_usage)
+
+ # Create agent response
+ agent_response = {
+ "output": response_items,
+ "usage": response_usage
+ }
+
+ return agent_response
- return agent_response
\ No newline at end of file
+ async def predict_click(
+ self,
+ model: str,
+ image_b64: str,
+ instruction: str
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates based on image and instruction.
+
+ UITARS supports click prediction through its action parsing.
+
+ Args:
+ model: Model name to use
+ image_b64: Base64 encoded image
+ instruction: Instruction for where to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ try:
+ # Create prompt using grounding template
+ user_prompt = GROUNDING_UITARS_PROMPT_TEMPLATE.format(
+ instruction=instruction
+ )
+
+ # Process image for UITARS
+ processed_image, original_width, original_height = process_image_for_uitars(image_b64)
+ encoded_image = pil_to_base64(processed_image)
+
+ # Prepare messages for liteLLM
+ litellm_messages = [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant."
+ },
+ {
+ "role": "user",
+ "content": [
+ {"type": "text", "text": user_prompt},
+ {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded_image}"}}
+ ]
+ }
+ ]
+
+ # Prepare API call kwargs
+ api_kwargs = {
+ "model": model,
+ "messages": litellm_messages,
+ "max_tokens": 100,
+ "temperature": 0.0,
+ "do_sample": False
+ }
+
+ # Call liteLLM with UITARS model
+ response = await litellm.acompletion(**api_kwargs)
+
+ # Extract response content
+ response_content = response.choices[0].message.content.strip() # type: ignore
+
+ # Parse the response to extract click coordinates
+ # Look for click action with coordinates
+ click_pattern = r"click\(point='<\|box_start\|>\((\d+),(\d+)\)<\|box_end\|>'\)"
+ match = re.search(click_pattern, response_content)
+
+ if match:
+ x, y = int(match.group(1)), int(match.group(2))
+ # Scale coordinates back to original image dimensions
+ scale_x = original_width / processed_image.width
+ scale_y = original_height / processed_image.height
+
+ scaled_x = int(x * scale_x)
+ scaled_y = int(y * scale_y)
+
+ return (scaled_x, scaled_y)
+
+ return None
+
+ except Exception as e:
+ # Log error and return None
+ print(f"Error in predict_click: {e}")
+ return None
+
+ def get_capabilities(self) -> List[AgentCapability]:
+ """
+ Get list of capabilities supported by this agent config.
+
+ Returns:
+ List of capability strings
+ """
+ return ["step", "click"]
\ No newline at end of file
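The grounding parse-and-rescale step in UITARS's `predict_click` can be checked in isolation. This sketch re-creates the click regex and the scaling arithmetic from the hunk above (the dimension values are made up for illustration; the real code derives them from `process_image_for_uitars`):

```python
import re
from typing import Optional, Tuple

# Same pattern as in predict_click: matches click(point='<|box_start|>(x,y)<|box_end|>')
CLICK_RE = re.compile(r"click\(point='<\|box_start\|>\((\d+),(\d+)\)<\|box_end\|>'\)")

def parse_and_rescale(text: str, proc_w: int, proc_h: int,
                      orig_w: int, orig_h: int) -> Optional[Tuple[int, int]]:
    """Parse model output in processed-image space, scale back to original pixels."""
    m = CLICK_RE.search(text)
    if not m:
        return None
    x, y = int(m.group(1)), int(m.group(2))
    return (int(x * orig_w / proc_w), int(y * orig_h / proc_h))

out = "Action: click(point='<|box_start|>(512,384)<|box_end|>')"
# Model saw a 1024x768 processed image; original screenshot was 2048x1536
print(parse_and_rescale(out, 1024, 768, 2048, 1536))  # (1024, 768)
```

Because the model only ever sees the resized image, rescaling by `original / processed` per axis is what keeps the returned coordinates valid on the real screen.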
diff --git a/libs/python/agent/agent/responses.py b/libs/python/agent/agent/responses.py
index 2d7e85d0..fb034a70 100644
--- a/libs/python/agent/agent/responses.py
+++ b/libs/python/agent/agent/responses.py
@@ -40,7 +40,7 @@ def make_input_image_item(image_data: Union[str, bytes]) -> EasyInputMessagePara
ResponseInputImageParam(
type="input_image",
image_url=f"data:image/png;base64,{base64.b64encode(image_data).decode('utf-8') if isinstance(image_data, bytes) else image_data}"
- )
+ ) # type: ignore
],
role="user",
type="message"
@@ -205,3 +205,524 @@ def make_wait_item(call_id: Optional[str] = None) -> ResponseComputerToolCallPar
status="completed",
type="computer_call"
)
+
+# Extra anthropic computer calls
+def make_left_mouse_down_item(x: Optional[int] = None, y: Optional[int] = None, call_id: Optional[str] = None) -> Dict[str, Any]:
+ return {
+ "id": random_id(),
+ "call_id": call_id if call_id else random_id(),
+ "action": {
+ "type": "left_mouse_down",
+ "x": x,
+ "y": y
+ },
+ "pending_safety_checks": [],
+ "status": "completed",
+ "type": "computer_call"
+ }
+
+def make_left_mouse_up_item(x: Optional[int] = None, y: Optional[int] = None, call_id: Optional[str] = None) -> Dict[str, Any]:
+ return {
+ "id": random_id(),
+ "call_id": call_id if call_id else random_id(),
+ "action": {
+ "type": "left_mouse_up",
+ "x": x,
+ "y": y
+ },
+ "pending_safety_checks": [],
+ "status": "completed",
+ "type": "computer_call"
+ }
+
+def make_failed_tool_call_items(tool_name: str, tool_kwargs: Dict[str, Any], error_message: str, call_id: Optional[str] = None) -> List[Dict[str, Any]]:
+ call_id = call_id if call_id else random_id()
+ return [
+ {
+ "type": "function_call",
+ "id": random_id(),
+ "call_id": call_id,
+ "name": tool_name,
+ "arguments": json.dumps(tool_kwargs),
+ },
+ {
+ "type": "function_call_output",
+ "call_id": call_id,
+ "output": json.dumps({"error": error_message}),
+ }
+ ]
+
+# Conversion functions between element descriptions and coordinates
+def convert_computer_calls_desc2xy(responses_items: List[Dict[str, Any]], desc2xy: Dict[str, tuple]) -> List[Dict[str, Any]]:
+ """
+ Convert computer calls from element descriptions to x,y coordinates.
+
+ Args:
+ responses_items: List of response items containing computer calls with element_description
+ desc2xy: Dictionary mapping element descriptions to (x, y) coordinate tuples
+
+ Returns:
+ List of response items with element_description replaced by x,y coordinates
+ """
+ converted_items = []
+
+ for item in responses_items:
+ if item.get("type") == "computer_call" and "action" in item:
+ action = item["action"].copy()
+
+ # Handle single element_description
+ if "element_description" in action:
+ desc = action["element_description"]
+ if desc in desc2xy:
+ x, y = desc2xy[desc]
+ action["x"] = x
+ action["y"] = y
+ del action["element_description"]
+
+ # Handle start_element_description and end_element_description for drag operations
+ elif "start_element_description" in action and "end_element_description" in action:
+ start_desc = action["start_element_description"]
+ end_desc = action["end_element_description"]
+
+ if start_desc in desc2xy and end_desc in desc2xy:
+ start_x, start_y = desc2xy[start_desc]
+ end_x, end_y = desc2xy[end_desc]
+ action["path"] = [{"x": start_x, "y": start_y}, {"x": end_x, "y": end_y}]
+ del action["start_element_description"]
+ del action["end_element_description"]
+
+ converted_item = item.copy()
+ converted_item["action"] = action
+ converted_items.append(converted_item)
+ else:
+ converted_items.append(item)
+
+ return converted_items
+
+
+def convert_computer_calls_xy2desc(responses_items: List[Dict[str, Any]], desc2xy: Dict[str, tuple]) -> List[Dict[str, Any]]:
+ """
+ Convert computer calls from x,y coordinates to element descriptions.
+
+ Args:
+ responses_items: List of response items containing computer calls with x,y coordinates
+ desc2xy: Dictionary mapping element descriptions to (x, y) coordinate tuples
+
+ Returns:
+ List of response items with x,y coordinates replaced by element_description
+ """
+ # Create reverse mapping from coordinates to descriptions
+ xy2desc = {coords: desc for desc, coords in desc2xy.items()}
+
+ converted_items = []
+
+ for item in responses_items:
+ if item.get("type") == "computer_call" and "action" in item:
+ action = item["action"].copy()
+
+ # Handle single x,y coordinates
+ if "x" in action and "y" in action:
+ coords = (action["x"], action["y"])
+ if coords in xy2desc:
+ action["element_description"] = xy2desc[coords]
+ del action["x"]
+ del action["y"]
+
+ # Handle path for drag operations
+ elif "path" in action and isinstance(action["path"], list) and len(action["path"]) == 2:
+ start_point = action["path"][0]
+ end_point = action["path"][1]
+
+ if ("x" in start_point and "y" in start_point and
+ "x" in end_point and "y" in end_point):
+
+ start_coords = (start_point["x"], start_point["y"])
+ end_coords = (end_point["x"], end_point["y"])
+
+ if start_coords in xy2desc and end_coords in xy2desc:
+ action["start_element_description"] = xy2desc[start_coords]
+ action["end_element_description"] = xy2desc[end_coords]
+ del action["path"]
+
+ converted_item = item.copy()
+ converted_item["action"] = action
+ converted_items.append(converted_item)
+ else:
+ converted_items.append(item)
+
+ return converted_items
+
+
+def get_all_element_descriptions(responses_items: List[Dict[str, Any]]) -> List[str]:
+ """
+ Extract all element descriptions from computer calls in responses items.
+
+ Args:
+ responses_items: List of response items containing computer calls
+
+ Returns:
+ List of unique element descriptions found in computer calls
+ """
+ descriptions = set()
+
+ for item in responses_items:
+ if item.get("type") == "computer_call" and "action" in item:
+ action = item["action"]
+
+ # Handle single element_description
+ if "element_description" in action:
+ descriptions.add(action["element_description"])
+
+ # Handle start_element_description and end_element_description for drag operations
+ if "start_element_description" in action:
+ descriptions.add(action["start_element_description"])
+
+ if "end_element_description" in action:
+ descriptions.add(action["end_element_description"])
+
+ return list(descriptions)
+
+
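Taken together, these helpers let a composed agent swap pixel coordinates for element descriptions (for the grounding model) and back. As a quick illustration, here is an abbreviated standalone re-implementation of the single-point case of `convert_computer_calls_xy2desc` (drag paths and the reverse direction omitted; `Submit button` is a made-up description):

```python
from typing import Any, Dict, List, Tuple

def xy_to_desc(items: List[Dict[str, Any]],
               desc2xy: Dict[str, Tuple[int, int]]) -> List[Dict[str, Any]]:
    """Replace known (x, y) pairs in computer_call actions with element descriptions."""
    xy2desc = {coords: desc for desc, coords in desc2xy.items()}
    out = []
    for item in items:
        if item.get("type") == "computer_call" and "action" in item:
            action = dict(item["action"])
            coords = (action.get("x"), action.get("y"))
            if coords in xy2desc:
                action["element_description"] = xy2desc[coords]
                del action["x"], action["y"]
            item = {**item, "action": action}
        out.append(item)
    return out

desc2xy = {"Submit button": (640, 480)}
calls = [{"type": "computer_call", "action": {"type": "click", "x": 640, "y": 480}}]
converted = xy_to_desc(calls, desc2xy)
print(converted[0]["action"])
# {'type': 'click', 'element_description': 'Submit button'}
```

The real helpers additionally handle drag `path` entries and leave unmapped coordinates untouched, so a partial `desc2xy` table never corrupts the item list.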
+# Conversion functions between responses_items and completion messages formats
+def convert_responses_items_to_completion_messages(messages: List[Dict[str, Any]], allow_images_in_tool_results: bool = True) -> List[Dict[str, Any]]:
+ """Convert responses_items message format to liteLLM completion format.
+
+ Args:
+ messages: List of responses_items format messages
+ allow_images_in_tool_results: If True, include images in tool role messages.
+ If False, send tool message + separate user message with image.
+ """
+ completion_messages = []
+
+ for message in messages:
+ msg_type = message.get("type")
+ role = message.get("role")
+
+ # Handle user messages (both with and without explicit type)
+ if role == "user" or msg_type == "user":
+ content = message.get("content", "")
+ if isinstance(content, list):
+ # Handle list content (images, text blocks)
+ completion_content = []
+ for item in content:
+ if item.get("type") == "input_image":
+ completion_content.append({
+ "type": "image_url",
+ "image_url": {
+ "url": item.get("image_url")
+ }
+ })
+ elif item.get("type") == "input_text":
+ completion_content.append({
+ "type": "text",
+ "text": item.get("text")
+ })
+ elif item.get("type") == "text":
+ completion_content.append({
+ "type": "text",
+ "text": item.get("text")
+ })
+
+ completion_messages.append({
+ "role": "user",
+ "content": completion_content
+ })
+ elif isinstance(content, str):
+ # Handle string content
+ completion_messages.append({
+ "role": "user",
+ "content": content
+ })
+
+ # Handle assistant messages
+ elif role == "assistant" or msg_type == "message":
+ content = message.get("content", [])
+ if isinstance(content, list):
+ text_parts = []
+ for item in content:
+ if item.get("type") == "output_text":
+ text_parts.append(item.get("text", ""))
+ elif item.get("type") == "text":
+ text_parts.append(item.get("text", ""))
+
+ if text_parts:
+ completion_messages.append({
+ "role": "assistant",
+ "content": "\n".join(text_parts)
+ })
+
+ # Handle reasoning items (convert to assistant message)
+ elif msg_type == "reasoning":
+ summary = message.get("summary", [])
+ text_parts = []
+ for item in summary:
+ if item.get("type") == "summary_text":
+ text_parts.append(item.get("text", ""))
+
+ if text_parts:
+ completion_messages.append({
+ "role": "assistant",
+ "content": "\n".join(text_parts)
+ })
+
+ # Handle function calls
+ elif msg_type == "function_call":
+ # Add tool call to last assistant message or create new one
+ if not completion_messages or completion_messages[-1]["role"] != "assistant":
+ completion_messages.append({
+ "role": "assistant",
+ "content": "",
+ "tool_calls": []
+ })
+
+ if "tool_calls" not in completion_messages[-1]:
+ completion_messages[-1]["tool_calls"] = []
+
+ completion_messages[-1]["tool_calls"].append({
+ "id": message.get("call_id"),
+ "type": "function",
+ "function": {
+ "name": message.get("name"),
+ "arguments": message.get("arguments")
+ }
+ })
+
+ # Handle computer calls
+ elif msg_type == "computer_call":
+ # Add tool call to last assistant message or create new one
+ if not completion_messages or completion_messages[-1]["role"] != "assistant":
+ completion_messages.append({
+ "role": "assistant",
+ "content": "",
+ "tool_calls": []
+ })
+
+ if "tool_calls" not in completion_messages[-1]:
+ completion_messages[-1]["tool_calls"] = []
+
+ action = message.get("action", {})
+ completion_messages[-1]["tool_calls"].append({
+ "id": message.get("call_id"),
+ "type": "function",
+ "function": {
+ "name": "computer",
+ "arguments": json.dumps(action)
+ }
+ })
+
+ # Handle function/computer call outputs
+ elif msg_type in ["function_call_output", "computer_call_output"]:
+ output = message.get("output")
+ call_id = message.get("call_id")
+
+ if isinstance(output, dict) and output.get("type") == "input_image":
+ if allow_images_in_tool_results:
+ # Handle image output as tool response (may not work with all APIs)
+ completion_messages.append({
+ "role": "tool",
+ "tool_call_id": call_id,
+ "content": [{
+ "type": "image_url",
+ "image_url": {
+ "url": output.get("image_url")
+ }
+ }]
+ })
+ else:
+ # Send tool message + separate user message with image (OpenAI compatible)
+ completion_messages += [{
+ "role": "tool",
+ "tool_call_id": call_id,
+ "content": "[Execution completed. See screenshot below]"
+ }, {
+ "role": "user",
+ "content": [{
+ "type": "image_url",
+ "image_url": {
+ "url": output.get("image_url")
+ }
+ }]
+ }]
+ else:
+ # Handle text output as tool response
+ completion_messages.append({
+ "role": "tool",
+ "tool_call_id": call_id,
+ "content": str(output)
+ })
+
+ return completion_messages
+
+
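The `computer_call` branch above is the core of this conversion: the structured action dict is serialized into a synthetic `computer` function call so chat-completion APIs can carry it. A minimal standalone sketch of just that mapping (the helper name is illustrative, not part of the module):

```python
import json
from typing import Any, Dict

def computer_call_to_tool_call(item: Dict[str, Any]) -> Dict[str, Any]:
    """Wrap a responses_items computer_call as an assistant message with one tool call."""
    return {
        "role": "assistant",
        "content": "",
        "tool_calls": [{
            "id": item.get("call_id"),
            "type": "function",
            "function": {
                "name": "computer",
                "arguments": json.dumps(item.get("action", {})),
            },
        }],
    }

item = {"type": "computer_call", "call_id": "call_1",
        "action": {"type": "click", "x": 100, "y": 200}}
msg = computer_call_to_tool_call(item)
print(msg["tool_calls"][0]["function"]["arguments"])
# {"type": "click", "x": 100, "y": 200}
```

The reverse function below undoes this by `json.loads`-ing the arguments and rebuilding a `computer_call` item, which is what makes the two formats round-trippable.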
+def convert_completion_messages_to_responses_items(completion_messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
+ """Convert completion messages format to responses_items message format."""
+ responses_items = []
+ skip_next = False
+
+ for i, message in enumerate(completion_messages):
+ if skip_next:
+ skip_next = False
+ continue
+
+ role = message.get("role")
+ content = message.get("content")
+ tool_calls = message.get("tool_calls", [])
+
+ # Handle assistant messages with text content
+ if role == "assistant" and content and isinstance(content, str):
+ responses_items.append({
+ "type": "message",
+ "role": "assistant",
+ "content": [{
+ "type": "output_text",
+ "text": content
+ }]
+ })
+
+ # Handle tool calls
+ if tool_calls:
+ for tool_call in tool_calls:
+ if tool_call.get("type") == "function":
+ function = tool_call.get("function", {})
+ function_name = function.get("name")
+
+ if function_name == "computer":
+ # Parse computer action
+ try:
+ action = json.loads(function.get("arguments", "{}"))
+ # Change key from "action" -> "type"
+ if action.get("action"):
+ action["type"] = action["action"]
+ del action["action"]
+ responses_items.append({
+ "type": "computer_call",
+ "call_id": tool_call.get("id"),
+ "action": action,
+ "status": "completed"
+ })
+ except json.JSONDecodeError:
+ # Fallback to function call format
+ responses_items.append({
+ "type": "function_call",
+ "call_id": tool_call.get("id"),
+ "name": function_name,
+ "arguments": function.get("arguments", "{}"),
+ "status": "completed"
+ })
+ else:
+ # Regular function call
+ responses_items.append({
+ "type": "function_call",
+ "call_id": tool_call.get("id"),
+ "name": function_name,
+ "arguments": function.get("arguments", "{}"),
+ "status": "completed"
+ })
+
+ # Handle tool messages (function/computer call outputs)
+ elif role == "tool" and content:
+ tool_call_id = message.get("tool_call_id")
+ if isinstance(content, str):
+ # Check if this is the "[Execution completed. See screenshot below]" pattern
+ if content == "[Execution completed. See screenshot below]":
+ # Look ahead for the next user message with image
+ next_idx = i + 1
+ if (next_idx < len(completion_messages) and
+ completion_messages[next_idx].get("role") == "user" and
+ isinstance(completion_messages[next_idx].get("content"), list)):
+ # Found the pattern - extract image from next message
+ next_content = completion_messages[next_idx]["content"]
+ for item in next_content:
+ if item.get("type") == "image_url":
+ responses_items.append({
+ "type": "computer_call_output",
+ "call_id": tool_call_id,
+ "output": {
+ "type": "input_image",
+ "image_url": item.get("image_url", {}).get("url")
+ }
+ })
+ # Skip the next user message since we processed it
+ skip_next = True
+ break
+ else:
+ # No matching user message, treat as regular text
+ responses_items.append({
+ "type": "computer_call_output",
+ "call_id": tool_call_id,
+ "output": content
+ })
+ else:
+ # Determine if this is a computer call or function call output
+ try:
+ # Try to parse as structured output
+ parsed_content = json.loads(content)
+ if parsed_content.get("type") == "input_image":
+ responses_items.append({
+ "type": "computer_call_output",
+ "call_id": tool_call_id,
+ "output": parsed_content
+ })
+ else:
+ responses_items.append({
+ "type": "computer_call_output",
+ "call_id": tool_call_id,
+ "output": content
+ })
+ except json.JSONDecodeError:
+ # Plain text output - could be function or computer call
+ responses_items.append({
+ "type": "function_call_output",
+ "call_id": tool_call_id,
+ "output": content
+ })
+ elif isinstance(content, list):
+ # Handle structured content (e.g., images)
+ for item in content:
+ if item.get("type") == "image_url":
+ responses_items.append({
+ "type": "computer_call_output",
+ "call_id": tool_call_id,
+ "output": {
+ "type": "input_image",
+ "image_url": item.get("image_url", {}).get("url")
+ }
+ })
+ elif item.get("type") == "text":
+ responses_items.append({
+ "type": "function_call_output",
+ "call_id": tool_call_id,
+ "output": item.get("text")
+ })
+
+ # Handle actual user messages
+ elif role == "user" and content:
+ if isinstance(content, list):
+ # Handle structured user content (e.g., text + images)
+ user_content = []
+ for item in content:
+ if item.get("type") == "image_url":
+ user_content.append({
+ "type": "input_image",
+ "image_url": item.get("image_url", {}).get("url")
+ })
+ elif item.get("type") == "text":
+ user_content.append({
+ "type": "input_text",
+ "text": item.get("text")
+ })
+
+ if user_content:
+ responses_items.append({
+ "role": "user",
+ "type": "message",
+ "content": user_content
+ })
+ elif isinstance(content, str):
+ # Handle simple text user message
+ responses_items.append({
+ "role": "user",
+ "content": content
+ })
+
+ return responses_items
diff --git a/libs/python/agent/agent/types.py b/libs/python/agent/agent/types.py
index 2b07a6cf..23946c86 100644
--- a/libs/python/agent/agent/types.py
+++ b/libs/python/agent/agent/types.py
@@ -9,71 +9,21 @@ from litellm import ResponseInputParam, ResponsesAPIResponse, ToolParam
from collections.abc import Iterable
# Agent input types
-Messages = str | ResponseInputParam
+Messages = str | ResponseInputParam | List[Dict[str, Any]]
Tools = Optional[Iterable[ToolParam]]
# Agent output types
AgentResponse = ResponsesAPIResponse
+AgentCapability = Literal["step", "click"]
-# Agent loop registration
-class AgentLoopInfo(BaseModel):
- """Information about a registered agent loop"""
- func: Callable
+
+# Agent config registration
+class AgentConfigInfo(BaseModel):
+ """Information about a registered agent config"""
+ agent_class: type
models_regex: str
priority: int = 0
def matches_model(self, model: str) -> bool:
- """Check if this loop matches the given model"""
+ """Check if this agent config matches the given model"""
return bool(re.match(self.models_regex, model))
-
-# Computer tool interface
-class Computer(Protocol):
- """Protocol defining the interface for computer interactions."""
-
- async def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]:
- """Get the current environment type."""
- ...
-
- async def get_dimensions(self) -> tuple[int, int]:
- """Get screen dimensions as (width, height)."""
- ...
-
- async def screenshot(self) -> str:
- """Take a screenshot and return as base64 string."""
- ...
-
- async def click(self, x: int, y: int, button: str = "left") -> None:
- """Click at coordinates with specified button."""
- ...
-
- async def double_click(self, x: int, y: int) -> None:
- """Double click at coordinates."""
- ...
-
- async def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
- """Scroll at coordinates with specified scroll amounts."""
- ...
-
- async def type(self, text: str) -> None:
- """Type text."""
- ...
-
- async def wait(self, ms: int = 1000) -> None:
- """Wait for specified milliseconds."""
- ...
-
- async def move(self, x: int, y: int) -> None:
- """Move cursor to coordinates."""
- ...
-
- async def keypress(self, keys: List[str]) -> None:
- """Press key combination."""
- ...
-
- async def drag(self, path: List[Dict[str, int]]) -> None:
- """Drag along specified path."""
- ...
-
- async def get_current_url(self) -> str:
- """Get current URL (for browser environments)."""
- ...
diff --git a/libs/python/agent/agent/ui/gradio/app.py b/libs/python/agent/agent/ui/gradio/app.py
index 13c0786f..be04d931 100644
--- a/libs/python/agent/agent/ui/gradio/app.py
+++ b/libs/python/agent/agent/ui/gradio/app.py
@@ -178,13 +178,20 @@ def create_computer_instance(
"""Create or get the global Computer instance."""
global global_computer
if global_computer is None:
- global_computer = Computer(
- verbosity=verbosity,
- os_type=os_type,
- provider_type=provider_type,
- name=name if name else "",
- api_key=api_key
- )
+ if provider_type == "localhost":
+ global_computer = Computer(
+ verbosity=verbosity,
+ os_type=os_type,
+ use_host_computer_server=True
+ )
+ else:
+ global_computer = Computer(
+ verbosity=verbosity,
+ os_type=os_type,
+ provider_type=provider_type,
+ name=name if name else "",
+ api_key=api_key
+ )
return global_computer
diff --git a/libs/python/agent/agent/ui/gradio/ui_components.py b/libs/python/agent/agent/ui/gradio/ui_components.py
index dfcceb4e..c601fb6c 100644
--- a/libs/python/agent/agent/ui/gradio/ui_components.py
+++ b/libs/python/agent/agent/ui/gradio/ui_components.py
@@ -211,7 +211,7 @@ if __name__ == "__main__":
is_windows = platform.system().lower() == "windows"
is_mac = platform.system().lower() == "darwin"
- providers = ["cloud"]
+ providers = ["cloud", "localhost"]
if is_mac:
providers += ["lume"]
if is_windows:
@@ -403,6 +403,23 @@ if __name__ == "__main__":
type="password",
)
+ # Provider visibility update function
+ def update_provider_visibility(provider):
+ """Update visibility of container name and API key based on selected provider."""
+ is_localhost = provider == "localhost"
+ return [
+ gr.update(visible=not is_localhost), # container_name
+ gr.update(visible=not is_localhost and not has_cua_key) # cua_cloud_api_key
+ ]
+
+ # Connect provider change event
+ computer_provider.change(
+ fn=update_provider_visibility,
+ inputs=[computer_provider],
+ outputs=[container_name, cua_cloud_api_key],
+ queue=False
+ )
+
# Connect UI update events
for dropdown in [agent_loop, omni_model_choice, uitars_model_choice, openai_model_choice, anthropic_model_choice]:
dropdown.change(
diff --git a/libs/python/agent/benchmarks/.gitignore b/libs/python/agent/benchmarks/.gitignore
new file mode 100644
index 00000000..a0aed392
--- /dev/null
+++ b/libs/python/agent/benchmarks/.gitignore
@@ -0,0 +1,3 @@
+output/
+interactive_output/
+*_results.md
\ No newline at end of file
diff --git a/libs/python/agent/benchmarks/README.md b/libs/python/agent/benchmarks/README.md
new file mode 100644
index 00000000..03d1a789
--- /dev/null
+++ b/libs/python/agent/benchmarks/README.md
@@ -0,0 +1,68 @@
+# Computer Agent Benchmarks
+
+This directory contains benchmarks designed to test agent providers in the Computer Agent SDK against reference agent implementations.
+
+## Overview
+
+The benchmark system evaluates models on GUI grounding tasks, specifically click prediction accuracy. It supports both:
+- **Computer Agent SDK providers** (using model strings like `"huggingface-local/HelloKKMe/GTA1-7B"`)
+- **Reference agent implementations** (custom model classes implementing the `ModelProtocol`)
+
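+Both kinds of entries are driven through the same wrapper. A minimal sketch, assuming the `ModelWrapper` and `get_available_models` helpers in `utils.py` that the benchmark scripts import:
+
+```python
+from utils import ModelWrapper, get_available_models
+
+for model in get_available_models():
+    # Accepts either a model string or a ModelProtocol instance
+    wrapper = ModelWrapper(model)
+    print(wrapper.model_name)
+```
+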
+## Available Benchmarks
+
+### 1. ScreenSpot-v2 (`ss-v2.py`)
+- **Dataset**: ScreenSpot-v2 (click-only GUI grounding)
+- **Format**: Standard resolution screenshots
+- **Task**: Predict click coordinates given an instruction and image
+- **Metrics**: Accuracy, Error Rate, Timing, VRAM usage
+
+### 2. ScreenSpot-Pro (`ss-pro.py`)
+- **Dataset**: ScreenSpot-Pro (high-resolution click-only GUI grounding)
+- **Format**: High-resolution screenshots
+- **Task**: Predict click coordinates given an instruction and image
+- **Metrics**: Accuracy, Error Rate, Timing, VRAM usage
+
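+For both ScreenSpot benchmarks, a prediction counts as accurate when the predicted point lands inside the ground-truth bounding box. The check lives in `is_click_in_bbox` in `utils.py`; a sketch of the idea (the actual helper may differ in detail):
+
+```python
+def is_click_in_bbox(coords, bbox):
+    """bbox is [x1, y1, x2, y2]; a missing prediction is never correct."""
+    if coords is None:
+        return False
+    x, y = coords
+    x1, y1, x2, y2 = bbox
+    return x1 <= x <= x2 and y1 <= y <= y2
+```
+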
+### 3. Interactive Testing (`interactive.py`)
+- **Real-time testing**: Take screenshots and visualize model predictions
+- **Commands**:
+  - Type an instruction → test all models on the last screenshot
+  - `screenshot` → take a new screenshot
+  - `models` → list available models
+  - `quit`/`exit` → exit the tool
+- **Output**: Visual predictions with crosshairs for each model
+
+## Running Benchmarks
+
+### 1. Configure Models
+Edit `utils.py` to specify which models you want to test in `get_available_models()`.
+
+### 2. Run Benchmark
+```bash
+# ScreenSpot-v2 benchmark
+python ss-v2.py --samples 50
+
+# ScreenSpot-Pro benchmark
+python ss-pro.py --samples 50
+
+# Interactive testing
+python interactive.py
+```
+
+## Output
+
+### Console Output
+```
+Model Results:
+ Accuracy: 85.50% (171/200)
+ Avg Time: 1.23s (0.89s - 2.45s)
+ VRAM Usage: 4.5GB (max) / 3.4GB (avg)
+```
+
+### Generated Files
+- **Markdown Report**: `*_results.md` with detailed results tables
+- **Visualizations**: `output/` directory with prediction visualizations
+- **Interactive Output**: `interactive_output/` for interactive session results
+
+## Contributing
+
+To add a new reference model, follow the instructions in [contrib.md](contrib.md).
\ No newline at end of file
diff --git a/libs/python/agent/benchmarks/contrib.md b/libs/python/agent/benchmarks/contrib.md
new file mode 100644
index 00000000..0bef9077
--- /dev/null
+++ b/libs/python/agent/benchmarks/contrib.md
@@ -0,0 +1,163 @@
+# Contributing Reference Agent Implementations
+
+This guide explains how to add your own reference agent implementations to the benchmark system.
+
+## Adding Reference Agent Implementations
+
+### 1. Implement the ModelProtocol
+
+Create a new file in the `models/` directory that implements the `ModelProtocol`:
+
+```python
+from models.base import ModelProtocol
+from typing import Optional, Tuple
+from PIL import Image
+
+class YourModelName(ModelProtocol):
+ def __init__(self, model_path: str):
+ self.model_path = model_path
+ self._model = None
+
+ @property
+ def model_name(self) -> str:
+ return self.model_path
+
+ async def load_model(self) -> None:
+ """Load the model into memory."""
+ # Your model loading logic here
+ pass
+
+ async def unload_model(self) -> None:
+ """Unload the model from memory."""
+ # Your model cleanup logic here
+ pass
+
+ async def predict_click(self, image: Image.Image, instruction: str) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates for the given image and instruction.
+
+ Args:
+ image: PIL Image to analyze
+ instruction: Text instruction describing what to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ # Your prediction logic here
+ return (x, y) # Return predicted coordinates
+```
+
+### 2. Register Your Model
+
+Add your model to the `get_available_models()` function in `utils.py`:
+
+```python
+def get_available_models() -> List[Union[str, ModelProtocol]]:
+ models = [
+ # Computer Agent SDK providers
+ "huggingface-local/HelloKKMe/GTA1-7B",
+
+ # Reference implementations
+ GTA1Model("HelloKKMe/GTA1-7B"),
+ YourModelName("path/to/your/model"), # Add your model here
+ ]
+ return models
+```
+
+### 3. Test Your Implementation
+
+Before submitting, test your model with the interactive tool:
+
+```bash
+python interactive.py
+```
+
+This will help you verify that your model loads correctly and produces reasonable predictions.
+
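+For a scripted sanity check instead, something like the following exercises the full load → predict → unload cycle (the module, class, and model path are placeholders for your own):
+
+```python
+import asyncio
+from PIL import Image
+
+from models.my_vision_model import MyVisionModel  # your module here
+
+async def smoke_test():
+    model = MyVisionModel("my-org/my-vision-model")
+    await model.load_model()
+    # A blank image is enough to confirm the pipeline runs end to end
+    coords = await model.predict_click(Image.new("RGB", (1920, 1080)), "click the OK button")
+    print(f"predicted: {coords}")
+    await model.unload_model()
+
+asyncio.run(smoke_test())
+```
+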
+## Example: Adding a New Model
+
+Here's a complete example of adding a hypothetical "MyVisionModel":
+
+1. **Create `models/my_vision_model.py`:**
+```python
+import torch
+from transformers import AutoModel, AutoProcessor
+from models.base import ModelProtocol
+from typing import Optional, Tuple
+from PIL import Image
+
+class MyVisionModel(ModelProtocol):
+ def __init__(self, model_path: str):
+ self.model_path = model_path
+ self.model = None
+ self.processor = None
+
+ @property
+ def model_name(self) -> str:
+ return f"MyVisionModel({self.model_path})"
+
+ async def load_model(self) -> None:
+ """Load the model and processor."""
+ self.processor = AutoProcessor.from_pretrained(self.model_path)
+ self.model = AutoModel.from_pretrained(
+ self.model_path,
+ torch_dtype=torch.float16,
+ device_map="auto"
+ )
+
+ async def unload_model(self) -> None:
+ """Clean up model resources."""
+ del self.model
+ del self.processor
+ self.model = None
+ self.processor = None
+ torch.cuda.empty_cache()
+
+ async def predict_click(self, image: Image.Image, instruction: str) -> Optional[Tuple[int, int]]:
+ """Predict click coordinates."""
+ try:
+ # Preprocess inputs
+ inputs = self.processor(
+ text=instruction,
+ images=image,
+ return_tensors="pt"
+ )
+
+ # Run inference
+ with torch.no_grad():
+ outputs = self.model(**inputs)
+
+ # Extract coordinates (model-specific logic)
+ x, y = self._extract_coordinates(outputs)
+ return (int(x), int(y))
+
+ except Exception as e:
+ print(f"Prediction failed: {e}")
+ return None
+
+ def _extract_coordinates(self, outputs):
+ """Extract x, y coordinates from model outputs."""
+ # Your model-specific coordinate extraction logic
+ pass
+```
+
+2. **Update `models/__init__.py`:**
+```python
+from .gta1 import GTA1Model
+from .my_vision_model import MyVisionModel
+
+__all__ = ["GTA1Model", "MyVisionModel"]
+```
+
+3. **Update `utils.py`:**
+```python
+from models import GTA1Model, MyVisionModel
+
+def get_available_models() -> List[Union[str, ModelProtocol]]:
+ models = [
+ "huggingface-local/HelloKKMe/GTA1-7B",
+ GTA1Model("HelloKKMe/GTA1-7B"),
+ MyVisionModel("my-org/my-vision-model"), # Add here
+ ]
+ return models
+```
diff --git a/libs/python/agent/benchmarks/interactive.py b/libs/python/agent/benchmarks/interactive.py
new file mode 100644
index 00000000..6d0aba82
--- /dev/null
+++ b/libs/python/agent/benchmarks/interactive.py
@@ -0,0 +1,201 @@
+#!/usr/bin/env python3
+"""
+Interactive Click Prediction Tool
+
+Takes screenshots and allows testing multiple models interactively.
+Models are loaded/unloaded one at a time to avoid memory issues.
+"""
+
+import asyncio
+import os
+from datetime import datetime
+from typing import List, Dict, Any
+
+from utils import (
+ ModelWrapper,
+ take_screenshot,
+ save_prediction_visualization,
+ get_available_models
+)
+
+
+async def predict_with_all_models(image, instruction: str, models) -> List[Dict[str, Any]]:
+ """
+ Predict click coordinates with all models sequentially.
+
+ Args:
+ image: PIL Image to analyze
+ instruction: Instruction text
+ models: List of model instances
+
+ Returns:
+ List of prediction results
+ """
+ predictions = []
+
+ for model in models:
+ model_wrapper = ModelWrapper(model)
+ print(f"\n🔄 Loading {model_wrapper.model_name}...")
+
+ try:
+ # Load model
+ await model_wrapper.load_model()
+
+ # Predict
+ coords = await model_wrapper.predict_click(image, instruction)
+
+ predictions.append({
+ 'model_name': model_wrapper.model_name,
+ 'coords': coords,
+ 'error': None
+ })
+
+ if coords:
+ print(f"✅ {model_wrapper.model_name}: ({coords[0]}, {coords[1]})")
+ else:
+ print(f"❌ {model_wrapper.model_name}: No prediction")
+
+ except Exception as e:
+ print(f"❌ {model_wrapper.model_name}: ERROR - {str(e)}")
+ predictions.append({
+ 'model_name': model_wrapper.model_name,
+ 'coords': None,
+ 'error': str(e)
+ })
+
+ finally:
+ # Always unload model to free memory
+ try:
+ await model_wrapper.unload_model()
+ print(f"🗑️ Unloaded {model_wrapper.model_name}")
+ except Exception as e:
+ print(f"⚠️ Error unloading {model_wrapper.model_name}: {e}")
+
+ return predictions
+
+
+def print_header():
+ """Print the interactive tool header."""
+ print("=" * 60)
+ print("🖱️ Interactive Click Prediction Tool")
+ print("=" * 60)
+ print("Commands:")
+ print(" • Type an instruction to test models on last screenshot")
+ print(" • 'screenshot' - Take a new screenshot")
+ print(" • 'models' - List available models")
+ print(" • 'quit' or 'exit' - Exit the tool")
+ print("=" * 60)
+ print("💡 Tip: Take a screenshot first, then send instructions to test models!")
+
+
+def print_models(models):
+ """Print available models."""
+ print("\n📋 Available Models:")
+ for i, model in enumerate(models, 1):
+ if isinstance(model, str):
+ print(f" {i}. {model}")
+ else:
+ print(f" {i}. models.{model.__class__.__name__}")
+
+
+async def main():
+ """
+ Main interactive loop.
+ """
+ print_header()
+
+ # Get available models
+ models = get_available_models()
+ print_models(models)
+
+ # Create output directory for visualizations
+ output_dir = "interactive_output"
+ os.makedirs(output_dir, exist_ok=True)
+
+ session_count = 0
+ last_screenshot = None
+ screenshot_timestamp = None
+
+ while True:
+ try:
+ # Get user input
+ print(f"\n{'='*40}")
+ user_input = input("🎯 Enter instruction (or command): ").strip()
+
+ if not user_input:
+ continue
+
+ # Handle commands
+ if user_input.lower() in ['quit', 'exit', 'q']:
+ print("👋 Goodbye!")
+ break
+
+ elif user_input.lower() == 'models':
+ print_models(models)
+ continue
+
+ elif user_input.lower() == 'screenshot':
+ print("📸 Taking screenshot...")
+ try:
+ last_screenshot = take_screenshot()
+ screenshot_timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+ screenshot_path = os.path.join(output_dir, f"screenshot_{screenshot_timestamp}.png")
+ last_screenshot.save(screenshot_path)
+ print(f"✅ Screenshot captured and saved to: {screenshot_path}")
+ print(f"📝 Ready for instructions! Screenshot size: {last_screenshot.size}")
+ except Exception as e:
+ print(f"❌ Error taking screenshot: {e}")
+ continue
+
+ # Handle instruction input
+ if last_screenshot is None:
+ print("⚠️ No screenshot available! Please take a screenshot first using 'screenshot' command.")
+ continue
+
+ session_count += 1
+ print(f"\n🎯 Session {session_count}: '{user_input}'")
+ print(f"📷 Using screenshot from: {screenshot_timestamp}")
+
+ # Predict with all models using last screenshot
+ print(f"\n🤖 Testing {len(models)} models on screenshot...")
+ predictions = await predict_with_all_models(last_screenshot, user_input, models)
+
+ # Display results summary
+ print(f"\n📊 Results Summary:")
+ print("-" * 50)
+ for pred in predictions:
+ if pred['coords']:
+ print(f"✅ {pred['model_name']}: ({pred['coords'][0]}, {pred['coords'][1]})")
+ elif pred['error']:
+ print(f"❌ {pred['model_name']}: ERROR - {pred['error']}")
+ else:
+ print(f"❌ {pred['model_name']}: No prediction")
+
+ # Save visualization
+ timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+ vis_filename = f"session_{session_count:03d}_{timestamp}.png"
+ vis_path = os.path.join(output_dir, vis_filename)
+
+ try:
+ save_prediction_visualization(last_screenshot, user_input, predictions, vis_path)
+ print(f"\n💾 Visualization saved to: {vis_path}")
+ except Exception as e:
+ print(f"⚠️ Error saving visualization: {e}")
+
+ print(f"\n✨ Session {session_count} completed!")
+
+ except KeyboardInterrupt:
+ print("\n\n👋 Interrupted by user. Goodbye!")
+ break
+ except Exception as e:
+ print(f"\n❌ Unexpected error: {e}")
+ print("Continuing...")
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except KeyboardInterrupt:
+ print("\n👋 Goodbye!")
+ except Exception as e:
+ print(f"❌ Fatal error: {e}")
diff --git a/libs/python/agent/benchmarks/models/__init__.py b/libs/python/agent/benchmarks/models/__init__.py
new file mode 100644
index 00000000..8af66c3d
--- /dev/null
+++ b/libs/python/agent/benchmarks/models/__init__.py
@@ -0,0 +1,3 @@
+from .base import ModelProtocol
+
+__all__ = ["ModelProtocol"]
diff --git a/libs/python/agent/benchmarks/models/base.py b/libs/python/agent/benchmarks/models/base.py
new file mode 100644
index 00000000..8ad100a3
--- /dev/null
+++ b/libs/python/agent/benchmarks/models/base.py
@@ -0,0 +1,36 @@
+"""
+Base protocol for benchmark models.
+"""
+
+from typing import Protocol, Optional, Tuple
+from PIL import Image
+
+
+class ModelProtocol(Protocol):
+ """Protocol for benchmark models that can predict click coordinates."""
+
+ @property
+ def model_name(self) -> str:
+ """Return the name of the model."""
+ ...
+
+ async def load_model(self) -> None:
+ """Load the model into memory."""
+ ...
+
+ async def unload_model(self) -> None:
+ """Unload the model from memory."""
+ ...
+
+ async def predict_click(self, image: Image.Image, instruction: str) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates for the given image and instruction.
+
+ Args:
+ image: PIL Image to analyze
+ instruction: Text instruction describing what to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ ...
diff --git a/libs/python/agent/benchmarks/models/gta1.py b/libs/python/agent/benchmarks/models/gta1.py
new file mode 100644
index 00000000..a1dee599
--- /dev/null
+++ b/libs/python/agent/benchmarks/models/gta1.py
@@ -0,0 +1,162 @@
+"""
+GTA1 model implementation for benchmarking.
+"""
+
+from typing import Optional, Tuple
+from PIL import Image
+import torch
+import re
+import gc
+from qwen_vl_utils import process_vision_info, smart_resize
+from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
+
+from .base import ModelProtocol
+
+
+class GTA1Model:
+ """Ground truth GTA1 model implementation."""
+
+ def __init__(self, model_path: str = "HelloKKMe/GTA1-7B"):
+ self.model_path = model_path
+ self.model = None
+ self.processor = None
+ self.max_new_tokens = 32
+
+ self.system_prompt = '''
+You are an expert UI element locator. Given a GUI image and a user's element description, provide the coordinates of the specified element as a single (x,y) point. The image resolution is height {height} and width {width}. For elements with area, return the center point.
+
+Output the coordinate pair exactly:
+(x,y)
+'''.strip()
+
+ @property
+ def model_name(self) -> str:
+ """Return the name of the model."""
+ return f"GTA1-{self.model_path.split('/')[-1]}"
+
+ async def load_model(self) -> None:
+ """Load the model into memory."""
+ if self.model is None:
+ print(f"Loading GTA1 model: {self.model_path}")
+ self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+ self.model_path,
+ torch_dtype=torch.bfloat16,
+ device_map="auto"
+ )
+ self.processor = AutoProcessor.from_pretrained(
+ self.model_path,
+ min_pixels=3136,
+ max_pixels=4096 * 2160
+ )
+ print("GTA1 model loaded successfully")
+
+ async def unload_model(self) -> None:
+ """Unload the model from memory."""
+ if self.model is not None:
+ print("Unloading GTA1 model from GPU...")
+ del self.model
+ del self.processor
+ self.model = None
+ self.processor = None
+ gc.collect()
+ if torch.cuda.is_available():
+ torch.cuda.empty_cache()
+ print("GTA1 model unloaded")
+
+ def _extract_coordinates(self, raw_string: str) -> Tuple[int, int]:
+ """Extract coordinates from model output."""
+ try:
+ matches = re.findall(r"\((-?\d*\.?\d+),\s*(-?\d*\.?\d+)\)", raw_string)
+ return tuple(map(int, map(float, matches[0]))) # type: ignore
+        except (IndexError, ValueError):
+ return (0, 0)
+
+ async def predict_click(self, image: Image.Image, instruction: str) -> Optional[Tuple[int, int]]:
+ """
+ Predict click coordinates for the given image and instruction.
+
+ Args:
+ image: PIL Image to analyze
+ instruction: Text instruction describing what to click
+
+ Returns:
+ Tuple of (x, y) coordinates or None if prediction fails
+ """
+ if self.model is None or self.processor is None:
+ await self.load_model()
+
+ assert self.processor is not None
+ assert self.model is not None
+
+ try:
+ width, height = image.width, image.height
+
+ # Resize image according to processor requirements
+ resized_height, resized_width = smart_resize(
+ image.height,
+ image.width,
+ factor=self.processor.image_processor.patch_size * self.processor.image_processor.merge_size,
+ min_pixels=self.processor.image_processor.min_pixels,
+ max_pixels=self.processor.image_processor.max_pixels,
+ )
+ resized_image = image.resize((resized_width, resized_height))
+ scale_x, scale_y = width / resized_width, height / resized_height
+
+ # Prepare messages
+ system_message = {
+ "role": "system",
+ "content": self.system_prompt.format(height=resized_height, width=resized_width)
+ }
+
+ user_message = {
+ "role": "user",
+ "content": [
+ {"type": "image", "image": resized_image},
+ {"type": "text", "text": instruction}
+ ]
+ }
+
+ # Process inputs
+ image_inputs, video_inputs = process_vision_info([system_message, user_message]) # type: ignore
+ text = self.processor.apply_chat_template(
+ [system_message, user_message],
+ tokenize=False,
+ add_generation_prompt=True
+ )
+ inputs = self.processor(
+ text=[text],
+ images=image_inputs,
+ videos=video_inputs,
+ padding=True,
+ return_tensors="pt"
+ )
+ inputs = inputs.to(self.model.device)
+
+ # Generate prediction
+ output_ids = self.model.generate(
+ **inputs,
+ max_new_tokens=self.max_new_tokens,
+ do_sample=False,
+ use_cache=True
+ )
+ generated_ids = [
+ output_ids[len(input_ids):]
+ for input_ids, output_ids in zip(inputs.input_ids, output_ids)
+ ]
+ output_text = self.processor.batch_decode(
+ generated_ids,
+ skip_special_tokens=True,
+ clean_up_tokenization_spaces=True
+ )[0]
+
+ # Extract and rescale coordinates
+ pred_x, pred_y = self._extract_coordinates(output_text)
+ pred_x = int(pred_x * scale_x)
+ pred_y = int(pred_y * scale_y)
+
+ return (pred_x, pred_y)
+
+ except Exception as e:
+ print(f"Error in GTA1 prediction: {e}")
+ return None
diff --git a/libs/python/agent/benchmarks/ss-pro.py b/libs/python/agent/benchmarks/ss-pro.py
new file mode 100644
index 00000000..80e5e72f
--- /dev/null
+++ b/libs/python/agent/benchmarks/ss-pro.py
@@ -0,0 +1,186 @@
+#!/usr/bin/env python3
+"""
+ScreenSpot-Pro Benchmark Script
+
+Evaluates models on the ScreenSpot-Pro dataset for click prediction accuracy.
+Supports both ComputerAgent model strings and custom model classes.
+"""
+
+import argparse
+import asyncio
+import random
+import statistics
+import time
+from typing import Optional
+
+from datasets import load_dataset
+from tqdm import tqdm
+
+from utils import (
+ ModelWrapper,
+ is_click_in_bbox,
+ save_results_to_markdown,
+ save_visualizations,
+ get_available_models,
+ get_gpu_memory
+)
+
+
+async def evaluate_model(model_wrapper: ModelWrapper, dataset, max_samples: Optional[int] = None) -> dict:
+ """
+ Evaluate a model on the ScreenSpot-Pro dataset.
+
+ Args:
+ model_wrapper: ModelWrapper instance
+ dataset: ScreenSpot-Pro dataset (list of samples)
+ max_samples: Maximum number of samples to evaluate (None for all)
+
+ Returns:
+ Dictionary with evaluation results
+ """
+ print(f"\nEvaluating model: {model_wrapper.model_name}")
+
+ # Load model
+ await model_wrapper.load_model()
+
+ total_samples = len(dataset)
+ if max_samples is not None:
+ total_samples = min(max_samples, total_samples)
+
+ correct_predictions = 0
+ error_predictions = 0
+ results = []
+
+ for i in tqdm(range(total_samples), desc=f"Evaluating {model_wrapper.model_name}"):
+ sample = dataset[i]
+
+ # Extract sample data
+ image = sample['image']
+ instruction = sample['instruction']
+ bbox = sample['bbox'] # [x1, y1, x2, y2]
+ sample_id = sample['img_filename']
+
+ # Predict click coordinates with timing
+ start_time = time.time()
+ click_coords = await model_wrapper.predict_click(image, instruction)
+ prediction_time = time.time() - start_time
+
+        # Check if prediction is correct; a missing prediction counts as an error
+        failed = click_coords is None
+        if failed:
+            error_predictions += 1
+
+        is_correct = is_click_in_bbox(click_coords, bbox)
+        if is_correct:
+            correct_predictions += 1
+
+        results.append({
+            'id': sample_id,
+            'instruction': instruction,
+            'bbox': bbox,
+            'predicted_coords': click_coords,
+            'is_correct': is_correct,
+            'failed': failed,
+            'prediction_time': prediction_time
+        })
+
+ # Unload model
+ await model_wrapper.unload_model()
+
+ # Calculate metrics
+ accuracy = correct_predictions / total_samples if total_samples > 0 else 0.0
+ error_rate = error_predictions / total_samples if total_samples > 0 else 0.0
+
+ # Calculate timing statistics
+ successful_times = [r['prediction_time'] for r in results if not r['failed']]
+ avg_prediction_time = sum(successful_times) / len(successful_times) if successful_times else 0.0
+ median_prediction_time = statistics.median(successful_times) if successful_times else 0.0
+ min_prediction_time = min(successful_times) if successful_times else 0.0
+ max_prediction_time = max(successful_times) if successful_times else 0.0
+
+ # Get VRAM statistics
+ vram_stats = model_wrapper.get_vram_stats()
+
+ return {
+ 'model_name': model_wrapper.model_name,
+ 'total_samples': total_samples,
+ 'correct_predictions': correct_predictions,
+ 'failed_predictions': error_predictions,
+ 'accuracy': accuracy,
+ 'failure_rate': error_rate,
+ 'avg_prediction_time': avg_prediction_time,
+ 'median_prediction_time': median_prediction_time,
+ 'min_prediction_time': min_prediction_time,
+ 'max_prediction_time': max_prediction_time,
+ 'vram_max_mb': vram_stats['max_mb'],
+ 'vram_avg_mb': vram_stats['avg_mb'],
+ 'results': results
+ }
+
+
+async def main():
+ """
+ Main function to run the benchmark.
+ """
+ # Parse command line arguments
+ parser = argparse.ArgumentParser(description='ScreenSpot-Pro Benchmark Script')
+ parser.add_argument('--samples', type=int, default=300,
+ help='Number of samples to evaluate (default: 300)')
+ parser.add_argument('--seed', type=int, default=42,
+ help='Random seed for shuffling (default: 42)')
+ args = parser.parse_args()
+
+ # Set random seed
+ random.seed(args.seed)
+
+ # Load dataset
+ print("Loading ScreenSpot-Pro dataset...")
+ ds = load_dataset("lmms-lab/ScreenSpot-Pro")
+ dataset = ds['train'] # type: ignore
+ # Convert to list to support indexing
+ dataset_list = list(dataset)
+ print(f"Dataset loaded: {len(dataset_list)} samples")
+
+ # Shuffle dataset with seed
+ random.shuffle(dataset_list)
+ print(f"Dataset shuffled with seed {args.seed}")
+
+ # Get available models
+ models = get_available_models()
+
+ # Evaluation settings
+ max_samples = args.samples # Use command line argument
+
+ # Run evaluations
+ all_results = []
+
+ for model in models:
+ model_wrapper = ModelWrapper(model)
+ result = await evaluate_model(model_wrapper, dataset_list, max_samples)
+ all_results.append(result)
+
+ # Print summary
+ print(f"\n{result['model_name']} Results:")
+ print(f" Accuracy: {result['accuracy']*100:.2f}%")
+ print(f" Correct: {result['correct_predictions']}/{result['total_samples']}")
+ print(f" Errors: {result['failed_predictions']}")
+ print(f" Error Rate: {result['failure_rate']*100:.2f}%")
+ print(f" Avg Time: {result['avg_prediction_time']:.2f}s")
+ print(f" Median Time: {result['median_prediction_time']:.2f}s")
+ print(f" Time Range: {result['min_prediction_time']:.2f}s - {result['max_prediction_time']:.2f}s")
+ print(f" VRAM Max: {result['vram_max_mb']:.1f}MB")
+ print(f" VRAM Avg: {result['vram_avg_mb']:.1f}MB")
+
+ # Print GPU memory info
+ gpu_memory = get_gpu_memory()
+ if gpu_memory and gpu_memory[0] > 0:
+ print(f" GPU Free Memory: {gpu_memory[0]:.1f}MB")
+
+ # Save results
+ if all_results:
+ save_results_to_markdown(all_results)
+ save_visualizations(all_results, dataset_list)
+ print("\nBenchmark completed successfully!")
+ else:
+ print("\nNo successful evaluations completed.")
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
diff --git a/libs/python/agent/benchmarks/ss-v2.py b/libs/python/agent/benchmarks/ss-v2.py
new file mode 100644
index 00000000..dab1d4b1
--- /dev/null
+++ b/libs/python/agent/benchmarks/ss-v2.py
@@ -0,0 +1,206 @@
+#!/usr/bin/env python3
+"""
+ScreenSpot-v2 Benchmark Script
+
+Evaluates models on the ScreenSpot-v2 dataset for click prediction accuracy.
+Supports both ComputerAgent model strings and custom model classes.
+"""
+
+import argparse
+import asyncio
+import random
+import statistics
+import time
+from typing import Optional
+
+from datasets import load_dataset
+from tqdm import tqdm
+
+from utils import (
+ ModelWrapper,
+ is_click_in_bbox,
+ save_results_to_markdown,
+ save_visualizations,
+ get_available_models,
+ get_gpu_memory
+)
+
+
+async def evaluate_model(model_wrapper: ModelWrapper, samples, max_samples: Optional[int] = None) -> dict:
+ """
+ Evaluate a model on any iterable of samples.
+
+ Args:
+ model_wrapper: ModelWrapper instance
+ samples: Iterable of dicts with keys: image, bbox, instruction
+ max_samples: Maximum number of samples to evaluate (None for all)
+
+ Returns:
+ Dictionary with evaluation results
+ """
+ print(f"\nEvaluating model: {model_wrapper.model_name}")
+
+ # Load model
+ await model_wrapper.load_model()
+
+ # Convert to list if needed and limit samples
+ if hasattr(samples, '__len__'):
+ total_samples = len(samples)
+ if max_samples is not None:
+ total_samples = min(max_samples, total_samples)
+ sample_list = list(samples)[:total_samples]
+ else:
+ # For iterators, take max_samples or all
+ sample_list = list(samples)
+ if max_samples is not None:
+ sample_list = sample_list[:max_samples]
+ total_samples = len(sample_list)
+
+ correct_predictions = 0
+ error_predictions = 0
+ results = []
+
+ for i, sample in enumerate(tqdm(sample_list, desc=f"Evaluating {model_wrapper.model_name}")):
+ # Extract required data (only these 3 keys matter)
+ image = sample['image']
+ instruction = sample['instruction']
+ bbox = sample['bbox'] # [x1, y1, x2, y2]
+
+        # Predict click coordinates with timing; count exceptions as
+        # errors instead of aborting the whole evaluation run
+        start_time = time.time()
+        try:
+            click_coords = await model_wrapper.predict_click(image, instruction)
+            failed = False
+        except Exception as e:
+            print(f"\nPrediction failed on sample {i}: {e}")
+            click_coords = None
+            failed = True
+            error_predictions += 1
+        prediction_time = time.time() - start_time
+
+        # Check if prediction is correct
+        is_correct = is_click_in_bbox(click_coords, bbox)
+
+        if is_correct:
+            correct_predictions += 1
+
+        results.append({
+            'sample_idx': i,
+            'instruction': instruction,
+            'bbox': bbox,
+            'predicted_coords': click_coords,
+            'is_correct': is_correct,
+            'failed': failed,
+            'prediction_time': prediction_time
+        })
+
+ # Unload model
+ await model_wrapper.unload_model()
+
+ # Calculate metrics
+ accuracy = correct_predictions / total_samples if total_samples > 0 else 0.0
+ error_rate = error_predictions / total_samples if total_samples > 0 else 0.0
+
+ # Calculate timing statistics
+ successful_times = [r['prediction_time'] for r in results if not r['failed']]
+ avg_prediction_time = sum(successful_times) / len(successful_times) if successful_times else 0.0
+ median_prediction_time = statistics.median(successful_times) if successful_times else 0.0
+ min_prediction_time = min(successful_times) if successful_times else 0.0
+ max_prediction_time = max(successful_times) if successful_times else 0.0
+
+ # Get VRAM statistics
+ vram_stats = model_wrapper.get_vram_stats()
+
+ return {
+ 'model_name': model_wrapper.model_name,
+ 'total_samples': total_samples,
+ 'correct_predictions': correct_predictions,
+ 'failed_predictions': error_predictions,
+ 'accuracy': accuracy,
+ 'failure_rate': error_rate,
+ 'avg_prediction_time': avg_prediction_time,
+ 'median_prediction_time': median_prediction_time,
+ 'min_prediction_time': min_prediction_time,
+ 'max_prediction_time': max_prediction_time,
+ 'vram_max_mb': vram_stats['max_mb'],
+ 'vram_avg_mb': vram_stats['avg_mb'],
+ 'results': results
+ }
+
+
+async def main():
+ """
+ Main function to run the benchmark.
+ """
+ # Parse command line arguments
+ parser = argparse.ArgumentParser(description='ScreenSpot-v2 Benchmark Script')
+ parser.add_argument('--samples', type=int, default=500,
+ help='Number of samples to evaluate (default: 500)')
+ parser.add_argument('--seed', type=int, default=42,
+ help='Random seed for shuffling (default: 42)')
+ args = parser.parse_args()
+
+ # Set random seed
+ random.seed(args.seed)
+
+ # Load dataset
+ print("Loading ScreenSpot-v2 dataset...")
+ ds = load_dataset("lmms-lab/ScreenSpot-v2")
+ dataset = ds['train'] # type: ignore
+ # Convert to simple list of dicts with only required keys
+ samples = []
+ for item in dataset:
+ # Convert dataset item to dict if needed
+ item_dict = dict(item) if hasattr(item, 'keys') else item
+
+ # Convert ScreenSpot-v2 bbox format [x, y, w, h] to [x1, y1, x2, y2]
+ bbox_xywh = item_dict['bbox'] # type: ignore
+ x, y, w, h = bbox_xywh
+ bbox_xyxy = [x, y, x + w, y + h]
+
+ samples.append({
+ 'image': item_dict['image'], # type: ignore
+ 'instruction': item_dict['instruction'], # type: ignore
+ 'bbox': bbox_xyxy
+ })
+ print(f"Dataset loaded: {len(samples)} samples")
+
+ # Shuffle samples with seed
+ random.shuffle(samples)
+ print(f"Samples shuffled with seed {args.seed}")
+
+ # Get available models
+ models = get_available_models()
+
+ # Evaluation settings
+ max_samples = args.samples # Use command line argument
+
+ # Run evaluations
+ all_results = []
+
+ for model in models:
+ model_wrapper = ModelWrapper(model)
+ result = await evaluate_model(model_wrapper, samples, max_samples)
+ all_results.append(result)
+
+ # Print summary
+ print(f"\n{result['model_name']} Results:")
+ print(f" Accuracy: {result['accuracy']*100:.2f}%")
+ print(f" Correct: {result['correct_predictions']}/{result['total_samples']}")
+ print(f" Errors: {result['failed_predictions']}")
+ print(f" Error Rate: {result['failure_rate']*100:.2f}%")
+ print(f" Avg Time: {result['avg_prediction_time']:.2f}s")
+ print(f" Median Time: {result['median_prediction_time']:.2f}s")
+ print(f" Time Range: {result['min_prediction_time']:.2f}s - {result['max_prediction_time']:.2f}s")
+ print(f" VRAM Max: {result['vram_max_mb']:.1f}MB")
+ print(f" VRAM Avg: {result['vram_avg_mb']:.1f}MB")
+
+ # Print GPU memory info
+ gpu_memory = get_gpu_memory()
+ if gpu_memory and gpu_memory[0] > 0:
+ print(f" GPU Free Memory: {gpu_memory[0]:.1f}MB")
+
+ # Save results
+ if all_results:
+ save_results_to_markdown(all_results, "screenspot_v2_results.md", title="ScreenSpot-v2 Benchmark Results")
+ save_visualizations(all_results, samples)
+ print("\nBenchmark completed successfully!")
+ else:
+ print("\nNo successful evaluations completed.")
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
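The loader above converts ScreenSpot-v2's `[x, y, w, h]` boxes to `[x1, y1, x2, y2]` before scoring clicks against them. A standalone sketch of that conversion together with the hit test (the helper names here are illustrative, not part of the SDK):

```python
def xywh_to_xyxy(bbox):
    """Convert an [x, y, w, h] box to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def click_in_box(click, bbox_xyxy):
    """Return True when an (x, y) click lands inside an [x1, y1, x2, y2] box."""
    if click is None:
        return False
    x, y = click
    x1, y1, x2, y2 = bbox_xyxy
    return x1 <= x <= x2 and y1 <= y <= y2

box = xywh_to_xyxy([10, 20, 100, 50])  # -> [10, 20, 110, 70]
print(box, click_in_box((60, 45), box), click_in_box((5, 5), box))
```

Because the box edges are inclusive, a click exactly on the boundary counts as a hit, matching `is_click_in_bbox` in `utils.py`.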
diff --git a/libs/python/agent/benchmarks/utils.py b/libs/python/agent/benchmarks/utils.py
new file mode 100644
index 00000000..d7ef4445
--- /dev/null
+++ b/libs/python/agent/benchmarks/utils.py
@@ -0,0 +1,409 @@
+#!/usr/bin/env python3
+"""
+Shared utilities for ScreenSpot-Pro benchmarking and interactive testing.
+"""
+
+import dotenv
+dotenv.load_dotenv()
+
+import asyncio
+import base64
+import os
+import sys
+import subprocess as sp
+import statistics
+from datetime import datetime
+from io import BytesIO
+from typing import List, Union, Tuple, Optional
+
+from PIL import Image, ImageDraw
+from tqdm import tqdm
+import gc
+import torch
+
+# Add parent directory to path for imports
+sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
+from agent.agent import ComputerAgent
+from models.base import ModelProtocol
+
+def get_gpu_memory() -> List[int]:
+ """
+ Get GPU memory usage using nvidia-smi.
+
+ Returns:
+ List of free memory values in MB for each GPU
+ """
+ try:
+ command = "nvidia-smi --query-gpu=memory.free --format=csv"
+ memory_free_info = sp.check_output(command.split()).decode('ascii').split('\n')[:-1][1:]
+        memory_free_values = [int(x.split()[0]) for x in memory_free_info]
+ return memory_free_values
+ except (sp.CalledProcessError, FileNotFoundError, IndexError):
+ # Fallback to torch if nvidia-smi is not available
+ if torch.cuda.is_available():
+ device = torch.cuda.current_device()
+ total = torch.cuda.get_device_properties(device).total_memory / 1024 / 1024
+ reserved = torch.cuda.memory_reserved(device) / 1024 / 1024
+ return [int(total - reserved)]
+ return [0]
+
+
+def get_vram_usage() -> dict:
+ """
+ Get current VRAM usage statistics.
+
+ Returns:
+ Dictionary with VRAM usage info (in MB)
+ """
+ if torch.cuda.is_available():
+ device = torch.cuda.current_device()
+ allocated = torch.cuda.memory_allocated(device) / 1024 / 1024 # Convert to MB
+ reserved = torch.cuda.memory_reserved(device) / 1024 / 1024 # Convert to MB
+ total = torch.cuda.get_device_properties(device).total_memory / 1024 / 1024
+ return {
+ 'allocated_mb': allocated,
+ 'reserved_mb': reserved,
+ 'total_mb': total,
+ 'free_mb': total - reserved
+ }
+ else:
+ return {
+ 'allocated_mb': 0.0,
+ 'reserved_mb': 0.0,
+ 'total_mb': 0.0,
+ 'free_mb': 0.0
+ }
+
+
+def get_available_models() -> List[Union[str, ModelProtocol]]:
+ """
+ Get list of available models for testing.
+
+ Returns:
+ List of model strings and model classes
+ """
+ local_provider = "huggingface-local/" # Options: huggingface-local/ or mlx/
+
+ # from models.gta1 import GTA1Model
+
+ models = [
+ # === ComputerAgent model strings ===
+ "openai/computer-use-preview",
+ "anthropic/claude-opus-4-20250514",
+ # f"{local_provider}HelloKKMe/GTA1-7B",
+ # f"{local_provider}HelloKKMe/GTA1-32B",
+ "openai/computer-use-preview+openai/gpt-4o-mini",
+ "anthropic/claude-opus-4-20250514+openai/gpt-4o-mini",
+
+ # === Reference model classes ===
+ # GTA1Model("HelloKKMe/GTA1-7B"),
+ # GTA1Model("HelloKKMe/GTA1-32B"),
+ ]
+
+ return models
+
+
+def is_click_in_bbox(click_coords: Optional[Tuple[int, int]], bbox: List[int]) -> bool:
+ """
+ Check if click coordinates are within the bounding box.
+
+ Args:
+ click_coords: (x, y) coordinates or None
+ bbox: [x1, y1, x2, y2] bounding box
+
+ Returns:
+ True if click is within bbox, False otherwise
+ """
+ if click_coords is None:
+ return False
+
+ x, y = click_coords
+ x1, y1, x2, y2 = bbox
+
+ return x1 <= x <= x2 and y1 <= y <= y2
+
+
+def image_to_base64(image: Image.Image) -> str:
+ """
+ Convert PIL Image to base64 string.
+
+ Args:
+ image: PIL Image
+
+ Returns:
+ Base64 encoded image string
+ """
+ buffered = BytesIO()
+ image.save(buffered, format="PNG")
+ return base64.b64encode(buffered.getvalue()).decode()
+
+
+class ModelWrapper:
+ """
+ Wrapper to provide unified interface for both ComputerAgent and custom models.
+ """
+
+ def __init__(self, model: Union[str, ModelProtocol]):
+ self.model = model
+ self.is_computer_agent = isinstance(model, str)
+ self.agent: Optional[ComputerAgent] = None
+ self.vram_usage_history: List[float] = [] # Track VRAM usage over time
+
+ if self.is_computer_agent:
+ self.model_name = str(model)
+ else:
+ self.model_name = f"{model.__class__.__name__}('{getattr(model, 'model_name', 'unknown')}')"
+
+ async def load_model(self) -> None:
+ """Load the model."""
+ if self.is_computer_agent:
+ self.agent = ComputerAgent(model=str(self.model))
+ else:
+ await self.model.load_model() # type: ignore
+
+ # Record initial VRAM usage after loading
+ vram_info = get_vram_usage()
+ self.vram_usage_history.append(vram_info['allocated_mb'])
+
+ async def unload_model(self) -> None:
+ """Unload the model."""
+ if not self.is_computer_agent:
+ await self.model.unload_model() # type: ignore
+ else:
+ del self.agent
+ self.agent = None
+ gc.collect()
+ if torch.cuda.is_available():
+ torch.cuda.empty_cache()
+
+ # Record VRAM usage after unloading
+ vram_info = get_vram_usage()
+ self.vram_usage_history.append(vram_info['allocated_mb'])
+
+ def get_vram_stats(self) -> dict:
+ """Get VRAM usage statistics for this model."""
+ if not self.vram_usage_history:
+ return {'max_mb': 0.0, 'avg_mb': 0.0}
+
+ return {
+ 'max_mb': max(self.vram_usage_history),
+ 'avg_mb': sum(self.vram_usage_history) / len(self.vram_usage_history)
+ }
+
+
+ async def predict_click(self, image: Image.Image, instruction: str) -> Optional[Tuple[int, int]]:
+ """Predict click coordinates."""
+ # Record VRAM usage before prediction
+ vram_info = get_vram_usage()
+ self.vram_usage_history.append(vram_info['allocated_mb'])
+
+ if self.is_computer_agent:
+ if self.agent is None:
+ await self.load_model()
+
+ if self.agent is not None:
+ image_b64 = image_to_base64(image)
+ result = await self.agent.predict_click(instruction=instruction, image_b64=image_b64)
+
+ # Record VRAM usage after prediction
+ vram_info = get_vram_usage()
+ self.vram_usage_history.append(vram_info['allocated_mb'])
+
+ return result
+ return None
+ else:
+ result = await self.model.predict_click(image, instruction) # type: ignore
+
+ # Record VRAM usage after prediction
+ vram_info = get_vram_usage()
+ self.vram_usage_history.append(vram_info['allocated_mb'])
+
+ return result
+
+
+def save_results_to_markdown(all_results: List[dict], output_file: str = "screenspot_pro_results.md", title: str = "ScreenSpot-Pro Benchmark Results") -> None:
+    """
+    Save evaluation results to a markdown table.
+
+    Args:
+        all_results: List of evaluation results for each model
+        output_file: Output markdown file path
+        title: Heading written at the top of the results file
+    """
+ with open(output_file, 'w', encoding='utf-8') as f:
+ f.write(f"# {title}\n\n")
+ f.write(f"**Evaluation Date:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
+
+ # Summary table
+ f.write("## Summary\n\n")
+ f.write("| Model | Total Samples | Correct | Errors | Accuracy | Error Rate | Avg Time (s) | Median Time (s) | Time Range (s) | VRAM Max (GB) | VRAM Avg (GB) |\n")
+ f.write("|-------|---------------|---------|--------|----------|------------|--------------|-----------------|----------------|---------------|---------------|\n")
+
+ for result in all_results:
+ model_name = result['model_name']
+ total = result['total_samples']
+ correct = result['correct_predictions']
+ errors = result['failed_predictions']
+ accuracy = result['accuracy'] * 100
+ error_rate = result['failure_rate'] * 100
+ avg_time = result.get('avg_prediction_time', 0.0)
+ median_time = result.get('median_prediction_time', 0.0)
+ min_time = result.get('min_prediction_time', 0.0)
+ max_time = result.get('max_prediction_time', 0.0)
+ time_range = f"{min_time:.2f} - {max_time:.2f}"
+ vram_max = result.get('vram_max_mb', 0.0) / 1024
+ vram_avg = result.get('vram_avg_mb', 0.0) / 1024
+
+ f.write(f"| {model_name} | {total} | {correct} | {errors} | {accuracy:.2f}% | {error_rate:.2f}% | {avg_time:.2f} | {median_time:.2f} | {time_range} | {vram_max:.1f} | {vram_avg:.1f} |\n")
+
+ # Detailed results for each model
+ for result in all_results:
+ f.write(f"\n## {result['model_name']} - Detailed Results\n\n")
+ f.write("| Sample Index | Instruction | BBox | Predicted | Correct | Error | Time (s) |\n")
+ f.write("|-----------|-------------|------|-----------|---------|-------|----------|\n")
+
+ for sample_result in result['results'][:10]: # Show first 10 samples
+            sample_idx = sample_result.get('sample_idx', sample_result.get('id', ''))
+ instruction = sample_result['instruction'][:50] + "..." if len(sample_result['instruction']) > 50 else sample_result['instruction']
+ bbox = str(sample_result['bbox'])
+ predicted = str(sample_result['predicted_coords']) if sample_result['predicted_coords'] else "None"
+ correct = "PASS" if sample_result['is_correct'] else "FAIL"
+ error = "YES" if sample_result['failed'] else "NO"
+ pred_time = sample_result.get('prediction_time', 0.0)
+
+ f.write(f"| {sample_idx} | {instruction} | {bbox} | {predicted} | {correct} | {error} | {pred_time:.2f} |\n")
+
+ if len(result['results']) > 10:
+ f.write(f"\n*Showing first 10 of {len(result['results'])} samples*\n")
+
+ print(f"\nResults saved to: {output_file}")
+
+
+def save_visualizations(all_results: List[dict], samples, output_dir: str = "output") -> None:
+ """
+ Save visualizations of predicted coordinates vs bboxes to an output folder.
+
+ Args:
+ all_results: List of evaluation results for each model
+ samples: List of sample dicts with image, bbox, instruction keys
+ output_dir: Output directory path
+ """
+ os.makedirs(output_dir, exist_ok=True)
+
+ for result in all_results:
+ model_name = result['model_name'].replace('/', '_').replace('\\', '_')
+ model_dir = os.path.join(output_dir, model_name)
+ os.makedirs(model_dir, exist_ok=True)
+
+ print(f"Saving visualizations for {result['model_name']}...")
+
+ # Save first 10 samples for visualization
+ for i, sample_result in enumerate(tqdm(result['results'][:10], desc=f"Saving {model_name} visualizations")):
+ # Get sample data using index
+            # Get sample data using index (results may store 'sample_idx' or 'id')
+            sample_idx = sample_result.get('sample_idx', sample_result.get('id'))
+
+            if isinstance(sample_idx, int) and 0 <= sample_idx < len(samples):
+                sample = samples[sample_idx]
+                image = sample['image'].copy()  # Make a copy to avoid modifying original
+            else:
+                print(f"Warning: Could not find sample at index {sample_idx}")
+                continue
+
+ bbox = sample_result['bbox']
+ predicted_coords = sample_result['predicted_coords']
+ is_correct = sample_result['is_correct']
+
+ # Draw on image
+ draw = ImageDraw.Draw(image)
+
+ # Draw bounding box (ground truth) in green
+ x1, y1, x2, y2 = bbox
+ draw.rectangle([x1, y1, x2, y2], outline="green", width=3)
+ draw.text((x1, y1-20), "Ground Truth", fill="green")
+
+ # Draw predicted click in red or blue
+ if predicted_coords is not None:
+ px, py = predicted_coords
+ color = "blue" if is_correct else "red"
+ # Draw crosshair
+ crosshair_size = 15
+ draw.line([(px-crosshair_size, py), (px+crosshair_size, py)], fill=color, width=3)
+ draw.line([(px, py-crosshair_size), (px, py+crosshair_size)], fill=color, width=3)
+ draw.text((px+10, py-20), f"Predicted ({px},{py})", fill=color)
+
+ # Add status text
+ status = "CORRECT" if is_correct else "INCORRECT"
+ status_color = "blue" if is_correct else "red"
+ draw.text((10, 10), f"Status: {status}", fill=status_color)
+ draw.text((10, 30), f"Instruction: {sample_result['instruction'][:50]}...", fill="black")
+
+ # Save image
+ filename = f"sample_{i+1:02d}_idx{sample_idx}_{status.lower()}.png"
+ filepath = os.path.join(model_dir, filename)
+ image.save(filepath)
+
+ print(f"Visualizations saved to: {model_dir}")
+
+
+def save_prediction_visualization(image: Image.Image, instruction: str, predictions: List[dict],
+ output_file: str = "interactive_prediction.png") -> None:
+ """
+ Save visualization of multiple model predictions on a single image.
+
+ Args:
+ image: PIL Image to visualize
+ instruction: Instruction text
+ predictions: List of prediction dicts with keys: model_name, coords, error
+ output_file: Output file path
+ """
+ # Create a copy of the image
+ vis_image = image.copy()
+ draw = ImageDraw.Draw(vis_image)
+
+ # Colors for different models
+ colors = ["red", "blue", "orange", "purple", "brown", "pink", "gray", "olive"]
+
+ # Draw predictions
+ for i, pred in enumerate(predictions):
+ color = colors[i % len(colors)]
+ model_name = pred['model_name']
+ coords = pred.get('coords')
+ error = pred.get('error')
+
+ if coords is not None:
+ px, py = coords
+ # Draw crosshair
+ crosshair_size = 20
+ draw.line([(px-crosshair_size, py), (px+crosshair_size, py)], fill=color, width=4)
+ draw.line([(px, py-crosshair_size), (px, py+crosshair_size)], fill=color, width=4)
+ # Draw model name
+ draw.text((px+15, py+15), f"{model_name}: ({px},{py})", fill=color)
+ else:
+ # Draw error text
+ draw.text((10, 50 + i*20), f"{model_name}: ERROR - {error}", fill=color)
+
+ # Add instruction at the top
+ draw.text((10, 10), f"Instruction: {instruction}", fill="black")
+
+ # Save image
+ vis_image.save(output_file)
+ print(f"Prediction visualization saved to: {output_file}")
+
+
+def take_screenshot() -> Image.Image:
+ """
+ Take a screenshot of the current screen.
+
+ Returns:
+ PIL Image of the screenshot
+ """
+ try:
+ import pyautogui
+ screenshot = pyautogui.screenshot()
+ return screenshot
+ except ImportError:
+ print("pyautogui not installed. Please install it with: pip install pyautogui")
+ raise
+ except Exception as e:
+ print(f"Error taking screenshot: {e}")
+ raise
+
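Both benchmark scripts reduce their per-sample result dicts to the same summary metrics before printing and saving. A minimal sketch of that aggregation, mirroring the field names used in the result dicts above:

```python
import statistics

def summarize(results, total_samples):
    """Aggregate per-sample records the way evaluate_model does."""
    correct = sum(1 for r in results if r['is_correct'])
    failed = sum(1 for r in results if r['failed'])
    # Timing statistics only consider samples that produced a prediction
    times = [r['prediction_time'] for r in results if not r['failed']]
    return {
        'accuracy': correct / total_samples if total_samples else 0.0,
        'failure_rate': failed / total_samples if total_samples else 0.0,
        'avg_time': sum(times) / len(times) if times else 0.0,
        'median_time': statistics.median(times) if times else 0.0,
    }

demo = [
    {'is_correct': True,  'failed': False, 'prediction_time': 1.0},
    {'is_correct': False, 'failed': False, 'prediction_time': 3.0},
    {'is_correct': False, 'failed': True,  'prediction_time': 0.0},
]
print(summarize(demo, 3))  # accuracy 1/3, avg_time 2.0, median_time 2.0
```

Guarding every ratio with `if total_samples`/`if times` keeps the summary safe when a model fails on every sample.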
diff --git a/libs/python/agent/example.py b/libs/python/agent/example.py
index f686b790..485f484e 100644
--- a/libs/python/agent/example.py
+++ b/libs/python/agent/example.py
@@ -5,8 +5,7 @@ Example usage of the agent library with docstring-based tool definitions.
import asyncio
import logging
-from agent import agent_loop, ComputerAgent
-from agent.types import Messages
+from agent import ComputerAgent
from computer import Computer
from computer.helpers import sandboxed
diff --git a/libs/python/agent/pyproject.toml b/libs/python/agent/pyproject.toml
index be10f729..02535760 100644
--- a/libs/python/agent/pyproject.toml
+++ b/libs/python/agent/pyproject.toml
@@ -19,10 +19,10 @@ dependencies = [
"pydantic>=2.6.4",
"rich>=13.7.1",
"python-dotenv>=1.0.1",
- "cua-computer>=0.3.0,<0.5.0",
+ "cua-computer>=0.4.0,<0.5.0",
"cua-core>=0.1.8,<0.2.0",
"certifi>=2024.2.2",
- "litellm>=1.74.8"
+ "litellm>=1.74.12"
]
requires-python = ">=3.11"
@@ -38,8 +38,15 @@ uitars-mlx = [
"mlx-vlm>=0.1.27; sys_platform == 'darwin'"
]
uitars-hf = [
+ "accelerate",
+ "torch",
"transformers>=4.54.0"
]
+glm45v-hf = [
+ "accelerate",
+ "torch",
+ "transformers-v4.55.0-GLM-4.5V-preview"
+]
ui = [
"gradio>=5.23.3",
"python-dotenv>=1.0.1",
@@ -47,18 +54,25 @@ ui = [
cli = [
"yaspin>=3.1.0",
]
+hud = [
+ "hud-python==0.2.10",
+]
all = [
# omni requirements
"ultralytics>=8.0.0",
"cua-som>=0.1.0,<0.2.0",
# uitars requirements
"mlx-vlm>=0.1.27; sys_platform == 'darwin'",
+ "accelerate",
+ "torch",
"transformers>=4.54.0",
# ui requirements
"gradio>=5.23.3",
"python-dotenv>=1.0.1",
# cli requirements
"yaspin>=3.1.0",
+ # hud requirements
+ "hud-python==0.2.10",
]
[tool.uv]
diff --git a/libs/python/computer-server/computer_server/handlers/linux.py b/libs/python/computer-server/computer_server/handlers/linux.py
index 5429b1a2..34a63de5 100644
--- a/libs/python/computer-server/computer_server/handlers/linux.py
+++ b/libs/python/computer-server/computer_server/handlers/linux.py
@@ -23,6 +23,7 @@ logger = logging.getLogger(__name__)
# This allows the server to run in headless environments
try:
import pyautogui
+ pyautogui.FAILSAFE = False
logger.info("pyautogui successfully imported, GUI automation available")
except Exception as e:
diff --git a/libs/python/computer-server/computer_server/handlers/macos.py b/libs/python/computer-server/computer_server/handlers/macos.py
index 0cba0ca3..ded73408 100644
--- a/libs/python/computer-server/computer_server/handlers/macos.py
+++ b/libs/python/computer-server/computer_server/handlers/macos.py
@@ -1,4 +1,5 @@
import pyautogui
+pyautogui.FAILSAFE = False
from pynput.mouse import Button, Controller as MouseController
from pynput.keyboard import Key, Controller as KeyboardController
import time
diff --git a/libs/python/computer-server/computer_server/handlers/windows.py b/libs/python/computer-server/computer_server/handlers/windows.py
index 485aff4a..2d91ce53 100644
--- a/libs/python/computer-server/computer_server/handlers/windows.py
+++ b/libs/python/computer-server/computer_server/handlers/windows.py
@@ -18,6 +18,7 @@ logger = logging.getLogger(__name__)
# Try to import pyautogui
try:
import pyautogui
+ pyautogui.FAILSAFE = False
logger.info("pyautogui successfully imported, GUI automation available")
except Exception as e:
logger.error(f"pyautogui import failed: {str(e)}. GUI operations will not work.")
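The three handler patches above apply one pattern: import pyautogui defensively so the server can still start in headless environments, and disable its corner fail-safe, since an agent-driven cursor may legitimately land at (0, 0). In isolation the pattern looks like this sketch:

```python
# Defensive pyautogui import: agents move the cursor programmatically, so the
# (0, 0) "fail-safe" abort would otherwise raise pyautogui.FailSafeException.
try:
    import pyautogui
    pyautogui.FAILSAFE = False
except Exception as exc:
    # Headless environment (no display) or pyautogui not installed
    pyautogui = None
    print(f"pyautogui unavailable, GUI automation disabled: {exc}")
```

Catching the broad `Exception` (not just `ImportError`) matters here because pyautogui can raise display-related errors at import time on headless Linux.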
diff --git a/libs/python/computer/pyproject.toml b/libs/python/computer/pyproject.toml
index 2e564fa9..4a9b41bb 100644
--- a/libs/python/computer/pyproject.toml
+++ b/libs/python/computer/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "pdm.backend"
[project]
name = "cua-computer"
-version = "0.3.0"
+version = "0.4.0"
description = "Computer-Use Interface (CUI) framework powering Cua"
readme = "README.md"
authors = [
diff --git a/libs/python/mcp-server/README.md b/libs/python/mcp-server/README.md
index 3f3c8bbb..a94da8a7 100644
--- a/libs/python/mcp-server/README.md
+++ b/libs/python/mcp-server/README.md
@@ -16,6 +16,21 @@
**cua-mcp-server** is a MCP server for the Computer-Use Agent (CUA), allowing you to run CUA through Claude Desktop or other MCP clients.
+
+## LiteLLM Integration
+
+This MCP server features comprehensive liteLLM integration, allowing you to use any supported LLM provider with a simple model string configuration.
+
+- **Unified Configuration**: Use a single `CUA_MODEL_NAME` environment variable with a model string
+- **Automatic Provider Detection**: The agent automatically detects the provider and capabilities from the model string
+- **Extensive Provider Support**: Works with Anthropic, OpenAI, local models, and any liteLLM-compatible provider
+
+### Model String Examples:
+- **Anthropic**: `"anthropic/claude-3-5-sonnet-20241022"`
+- **OpenAI**: `"openai/computer-use-preview"`
+- **UI-TARS**: `"huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B"`
+- **Omni + Any LiteLLM**: `"omniparser+litellm/gpt-4o"`, `"omniparser+litellm/claude-3-haiku"`, `"omniparser+ollama_chat/gemma3"`
+
### Get started with Agent
## Prerequisites
@@ -65,10 +80,7 @@ You can then use the script in your MCP configuration like this:
"command": "/bin/bash",
"args": ["~/.cua/start_mcp_server.sh"],
"env": {
- "CUA_AGENT_LOOP": "OMNI",
- "CUA_MODEL_PROVIDER": "ANTHROPIC",
- "CUA_MODEL_NAME": "claude-3-7-sonnet-20250219",
- "CUA_PROVIDER_API_KEY": "your-api-key"
+ "CUA_MODEL_NAME": "anthropic/claude-3-5-sonnet-20241022"
}
}
}
@@ -86,11 +98,7 @@ If you want to develop with the cua-mcp-server directly without installation, yo
"command": "/bin/bash",
"args": ["~/cua/libs/python/mcp-server/scripts/start_mcp_server.sh"],
"env": {
- "CUA_AGENT_LOOP": "UITARS",
- "CUA_MODEL_PROVIDER": "OAICOMPAT",
- "CUA_MODEL_NAME": "ByteDance-Seed/UI-TARS-1.5-7B",
- "CUA_PROVIDER_BASE_URL": "https://****************.us-east-1.aws.endpoints.huggingface.cloud/v1",
- "CUA_PROVIDER_API_KEY": "your-api-key"
+ "CUA_MODEL_NAME": "huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B"
}
}
}
@@ -142,10 +150,7 @@ The server is configured using environment variables (can be set in the Claude D
| Variable | Description | Default |
|----------|-------------|---------|
-| `CUA_AGENT_LOOP` | Agent loop to use (OPENAI, ANTHROPIC, UITARS, OMNI) | OMNI |
-| `CUA_MODEL_PROVIDER` | Model provider (ANTHROPIC, OPENAI, OLLAMA, OAICOMPAT) | ANTHROPIC |
-| `CUA_MODEL_NAME` | Model name to use | None (provider default) |
-| `CUA_PROVIDER_BASE_URL` | Base URL for provider API | None |
+| `CUA_MODEL_NAME` | Model string (e.g., "anthropic/claude-3-5-sonnet-20241022", "openai/computer-use-preview", "huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B", "omniparser+litellm/gpt-4o", "omniparser+ollama_chat/gemma3") | anthropic/claude-3-5-sonnet-20241022 |
| `CUA_MAX_IMAGES` | Maximum number of images to keep in context | 3 |
## Available Tools
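The model strings documented above compose an optional grounding model with `+` and separate the provider prefix with `/`. A hypothetical parser illustrating how such a string could be split into its parts (the agent's actual detection logic may differ):

```python
def parse_model_string(model: str):
    """Split a model string like 'omniparser+litellm/gpt-4o' into
    (grounding, provider, model_name). Illustrative only."""
    grounding = None
    if '+' in model:
        # Composed agent: grounding model before '+', LLM after
        grounding, model = model.split('+', 1)
    # Provider prefix ends at the first '/'; the rest is the model name
    provider, _, name = model.partition('/')
    return grounding, provider, name

print(parse_model_string("anthropic/claude-3-5-sonnet-20241022"))
print(parse_model_string("omniparser+litellm/gpt-4o"))
print(parse_model_string("huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B"))
```

Note that only the first `/` separates the provider, so multi-segment Hugging Face repo IDs survive intact as the model name.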
diff --git a/libs/python/mcp-server/mcp_server/server.py b/libs/python/mcp-server/mcp_server/server.py
index 03971cb6..73996d5e 100644
--- a/libs/python/mcp-server/mcp_server/server.py
+++ b/libs/python/mcp-server/mcp_server/server.py
@@ -3,6 +3,7 @@ import base64
import logging
import os
import sys
import traceback
from typing import Any, Dict, List, Optional, Union, Tuple
@@ -28,7 +29,7 @@ except ImportError as e:
try:
from computer import Computer
- from agent import ComputerAgent, LLMProvider, LLM, AgentLoop
+ from agent import ComputerAgent
logger.debug("Successfully imported Computer and Agent modules")
except ImportError as e:
@@ -92,49 +93,27 @@ def serve() -> FastMCP:
global_computer = Computer(verbosity=logging.INFO)
await global_computer.run()
- # Determine which loop to use
- loop_str = os.getenv("CUA_AGENT_LOOP", "OMNI")
- loop = getattr(AgentLoop, loop_str)
+ # Get model name - this now determines the loop and provider
+ model_name = os.getenv("CUA_MODEL_NAME", "anthropic/claude-3-5-sonnet-20241022")
+
+ logger.info(f"Using model: {model_name}")
- # Determine provider
- provider_str = os.getenv("CUA_MODEL_PROVIDER", "ANTHROPIC")
- provider = getattr(LLMProvider, provider_str)
-
- # Get model name (if specified)
- model_name = os.getenv("CUA_MODEL_NAME", None)
-
- # Get base URL for provider (if needed)
- provider_base_url = os.getenv("CUA_PROVIDER_BASE_URL", None)
-
- # Get api key for provider (if needed)
- api_key = os.getenv("CUA_PROVIDER_API_KEY", None)
-
- # Create agent with the specified configuration
+ # Create agent with the new v0.4.x API
agent = ComputerAgent(
- computer=global_computer,
- loop=loop,
- model=LLM(
- provider=provider,
- name=model_name,
- provider_base_url=provider_base_url,
- ),
- api_key=api_key,
- save_trajectory=False,
+ model=model_name,
only_n_most_recent_images=int(os.getenv("CUA_MAX_IMAGES", "3")),
verbosity=logging.INFO,
+ tools=[global_computer]
)
+ # Create messages in the new v0.4.x format
+ messages = [{"role": "user", "content": task}]
+
# Collect all results
full_result = ""
- async for result in agent.run(task):
- logger.info(f"Agent step complete: {result.get('id', 'unknown')}")
- ctx.info(f"Agent step complete: {result.get('id', 'unknown')}")
-
- # Add response ID to output
- full_result += f"\n[Response ID: {result.get('id', 'unknown')}]\n"
-
- if "content" in result:
- full_result += f"Response: {result.get('content', '')}\n"
+ async for result in agent.run(messages):
+            logger.info("Agent processing step")
+            ctx.info("Agent processing step")
# Process output if available
outputs = result.get("output", [])
@@ -145,25 +124,23 @@ def serve() -> FastMCP:
content = output.get("content", [])
for content_part in content:
if content_part.get("text"):
- full_result += f"\nMessage: {content_part.get('text', '')}\n"
- elif output_type == "reasoning":
- logger.debug(f"Reasoning: {output}")
-
- summary_content = output.get("summary", [])
- if summary_content:
- for summary_part in summary_content:
- if summary_part.get("text"):
- full_result += f"\nReasoning: {summary_part.get('text', '')}\n"
+ full_result += f"Message: {content_part.get('text', '')}\n"
+ elif output_type == "tool_use":
+ logger.debug(f"Tool use: {output}")
+ tool_name = output.get("name", "")
+ full_result += f"Tool: {tool_name}\n"
+ elif output_type == "tool_result":
+ logger.debug(f"Tool result: {output}")
+ result_content = output.get("content", "")
+ if isinstance(result_content, list):
+ for item in result_content:
+ if item.get("type") == "text":
+ full_result += f"Result: {item.get('text', '')}\n"
else:
- full_result += f"\nReasoning: {output.get('text', output.get('content', ''))}\n"
- elif output_type == "computer_call":
- logger.debug(f"Computer call: {output}")
- action = output.get("action", "")
- result_value = output.get("result", "")
- full_result += f"\nComputer Action: {action}\nResult: {result_value}\n"
+ full_result += f"Result: {result_content}\n"
# Add separator between steps
- full_result += "\n" + "-" * 40 + "\n"
+ full_result += "\n" + "-" * 20 + "\n"
logger.info("CUA task completed successfully")
ctx.info("CUA task completed successfully")
@@ -179,7 +156,21 @@ def serve() -> FastMCP:
error_msg = f"Error running CUA task: {str(e)}\n{traceback.format_exc()}"
logger.error(error_msg)
ctx.error(error_msg)
- return f"Error during task execution: {str(e)}"
+ # Return tuple with error message and a screenshot if possible
+ try:
+ if global_computer is not None:
+ screenshot = await global_computer.interface.screenshot()
+ return (
+ f"Error during task execution: {str(e)}",
+ Image(format="png", data=screenshot)
+ )
+ except Exception:
+ # Screenshot capture failed; fall through to the placeholder below
+ pass
+ # If we can't get a screenshot, return a placeholder
+ return (
+ f"Error during task execution: {str(e)}",
+ Image(format="png", data=b"")
+ )
@server.tool()
async def run_multi_cua_tasks(ctx: Context, tasks: List[str]) -> List:
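The reworked per-step output handling above can be condensed into a standalone helper. This is a minimal sketch assuming the v0.4.x result shape visible in the diff (`{"output": [{"type": ..., ...}, ...]}`); the function name `summarize_step` is invented here for illustration:

```python
def summarize_step(result: dict) -> str:
    """Format one agent step's output items into display text."""
    lines = []
    for output in result.get("output", []):
        output_type = output.get("type")
        if output_type == "message":
            # Assistant messages carry a list of content parts
            for part in output.get("content", []):
                if part.get("text"):
                    lines.append(f"Message: {part['text']}")
        elif output_type == "tool_use":
            lines.append(f"Tool: {output.get('name', '')}")
        elif output_type == "tool_result":
            # Tool results may be a plain string or a list of typed parts
            content = output.get("content", "")
            if isinstance(content, list):
                for item in content:
                    if item.get("type") == "text":
                        lines.append(f"Result: {item.get('text', '')}")
            else:
                lines.append(f"Result: {content}")
    return "\n".join(lines)
```

Factoring the parsing out like this also makes the branch logic easy to unit-test without a live agent.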
diff --git a/libs/python/mcp-server/pyproject.toml b/libs/python/mcp-server/pyproject.toml
index ed2ad435..f80a1b6b 100644
--- a/libs/python/mcp-server/pyproject.toml
+++ b/libs/python/mcp-server/pyproject.toml
@@ -13,8 +13,8 @@ authors = [
]
dependencies = [
"mcp>=1.6.0,<2.0.0",
- "cua-agent[all]>=0.3.0,<0.4.0",
- "cua-computer>=0.3.0,<0.4.0",
+ "cua-agent[all]>=0.4.0,<0.5.0",
+ "cua-computer>=0.4.0,<0.5.0",
]
[project.scripts]
diff --git a/notebooks/agent_nb.ipynb b/notebooks/agent_nb.ipynb
index 4c39c204..61e7288a 100644
--- a/notebooks/agent_nb.ipynb
+++ b/notebooks/agent_nb.ipynb
@@ -379,7 +379,7 @@
"metadata": {},
"outputs": [],
"source": [
- "from agent.ui.gradio.app import create_gradio_ui\n",
+ "from agent.ui.gradio.ui_components import create_gradio_ui\n",
"\n",
"app = create_gradio_ui()\n",
"app.launch(share=False)"
diff --git a/notebooks/eval_osworld.ipynb b/notebooks/eval_osworld.ipynb
new file mode 100644
index 00000000..7b00795a
--- /dev/null
+++ b/notebooks/eval_osworld.ipynb
@@ -0,0 +1,110050 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# ComputerAgent HUD Integration for OSWorld\n",
+ "\n",
+ "This notebook demonstrates how to use the ComputerAgent with HUD for OSWorld benchmarking.\n",
+ "The ComputerAgent integration provides the same interface as OperatorAgent but works with both Claude and OpenAI models."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Install dependencies if needed\n",
+ "# !uv venv \n",
+ "# !source .venv/bin/activate\n",
+ "# !uv sync"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%load_ext autoreload\n",
+ "%autoreload 2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Required environment variables:\n",
+ "# - HUD_API_KEY (for HUD access)\n",
+ "# - ANTHROPIC_API_KEY (for Claude models)\n",
+ "# - OPENAI_API_KEY (for OpenAI models)\n",
+ "\n",
+ "from hud import gym, load_taskset\n",
+ "from pprint import pprint\n",
+ "import asyncio"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Import the HUD-integrated ComputerAgent\n",
+ "from agent.integrations.hud import ComputerAgent"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Total tasks in OSWorld: 367\n",
+ "Task prompt: Can you make my computer bring back the last tab I shut down?\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Load OSWorld taskset\n",
+ "taskset = await load_taskset(\"OSWorld-Verified\")\n",
+ "print(f\"Total tasks in OSWorld: {len(taskset)}\")\n",
+ "\n",
+ "# Select a test task\n",
+ "test = taskset[148]\n",
+ "print(f\"Task prompt: {test.prompt}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Total tasks in SheetBench: 50\n",
+ "Task prompt: Given the Input data, determine the ticker with the greatest correlation between volume and next day price change.\n",
+ "- in ANSWER tab put the Ticker in A1 and the correlation in B1\n",
+ " - use CORREL to determine correlation\n",
+ "- be sure to first sort the date by ticker z to a and then date ascending before calculating nextdaypricechange %\n",
+ "Correlation should be rounded to 2 decimal points\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Load SheetBench taskset\n",
+ "taskset = await load_taskset(\"SheetBench-V2\")\n",
+ "print(f\"Total tasks in SheetBench: {len(taskset)}\")\n",
+ "\n",
+ "# Select a test task\n",
+ "test = taskset[0]\n",
+ "print(f\"Task prompt: {test.prompt}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[INFO] 2025-08-08 19:08:17,078 | hud.environment | View the live trace at https://app.hud.so/trace/ca88c178-cf40-499b-8ad3-d5d60348d9fe\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Environment ready!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Create environment (takes ~2.5 minutes to start)\n",
+ "env = await gym.make(test)\n",
+ "print(\"Environment ready!\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "<!-- live VNC stream iframe (HTML content stripped during extraction) -->"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ "'\\n \\n '"
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "await env.stream() # vnc"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Test with any supported CUA model\n",
+ "\n",
+ "The ComputerAgent integration can use Claude, OpenAI, UI-TARS, or composed models just like the original ComputerAgent:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Created agent: computeragent-computer-use-preview\n"
+ ]
+ }
+ ],
+ "source": [
+ "import logging\n",
+ "# Create a ComputerAgent (Claude, OpenAI, or any supported model)\n",
+ "claude_agent = ComputerAgent(\n",
+ " # model=\"anthropic/claude-3-5-sonnet-20241022\",\n",
+ " model=\"openai/computer-use-preview\",\n",
+ " # environment=\"linux\", # OSWorld typically uses Linux\n",
+ " environment=\"browser\", # SheetBench uses the browser\n",
+ " trajectory_dir=\"trajectories\",\n",
+ " verbosity=logging.INFO,\n",
+ ")\n",
+ "\n",
+ "print(f\"Created agent: {claude_agent.name}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Initial observation complete\n",
+ "========= Step 1 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:14:10,479 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "2025-08-08 19:14:18,867 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 55, 'y': 149})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [ClickAction(type='click', reasoning='Sorting dataset for analysis preparation', logs={'conversation_length': 3}, point=Point(x=77, y=174), button='left', pattern=None, hold_keys=None)]\n",
+ "========= Step 2 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:14:24,566 - agent.ComputerAgent - INFO - LLM processing started with 4 messages\n",
+ "2025-08-08 19:14:30,430 - agent.ComputerAgent - INFO - Computer: keypress({'keys': ['CTRL', 'A']})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [PressAction(type='press', reasoning='Sorting dataset for analysis preparation', logs={'conversation_length': 5}, keys=['ctrl', 'a'])]\n",
+ "========= Step 3 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:14:36,137 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "2025-08-08 19:14:42,483 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 73, 'y': 151})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [ClickAction(type='click', reasoning='Sorting dataset for analysis preparation', logs={'conversation_length': 7}, point=Point(x=102, y=176), button='left', pattern=None, hold_keys=None)]\n",
+ "========= Step 4 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:14:48,687 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "2025-08-08 19:14:59,516 - agent.ComputerAgent - INFO - Computer: keypress({'keys': ['CTRL', 'A']})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [PressAction(type='press', reasoning='Sorting dataset for analysis preparation', logs={'conversation_length': 9}, keys=['ctrl', 'a'])]\n",
+ "========= Step 5 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:15:05,229 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "2025-08-08 19:15:15,153 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 55, 'y': 147}, {'x': 319, 'y': 713}]})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [DragAction(type='drag', reasoning='Highlighting data for sorting preparation', logs={'conversation_length': 12}, path=[Point(x=77, y=172), Point(x=448, y=835)], pattern=None, hold_keys=None)]\n",
+ "========= Step 6 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:15:21,362 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "2025-08-08 19:15:33,774 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 229, 'y': 41})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [ClickAction(type='click', reasoning='Opening sort options for data', logs={'conversation_length': 15}, point=Point(x=322, y=48), button='left', pattern=None, hold_keys=None)]\n",
+ "========= Step 7 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:15:39,973 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "2025-08-08 19:15:52,928 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 430, 'y': 96})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [ClickAction(type='click', reasoning='Choosing \"Sort range\" for sorting', logs={'conversation_length': 18}, point=Point(x=604, y=112), button='left', pattern=None, hold_keys=None)]\n",
+ "========= Step 8 ==========\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-08 19:15:59,611 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "2025-08-08 19:16:17,003 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 530, 'y': 172})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Agent's action: [ClickAction(type='click', reasoning='Accessing advanced sorting options now', logs={'conversation_length': 21}, point=Point(x=745, y=201), button='left', pattern=None, hold_keys=None)]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Initial observation\n",
+ "obs, _ = await env.reset()\n",
+ "print(\"Initial observation complete\")\n",
+ "\n",
+ "# Agent loop\n",
+ "for i in range(8):\n",
+ " print(f\"========= Step {i + 1} ==========\")\n",
+ " \n",
+ " try:\n",
+ " action, done = await claude_agent.predict(obs)\n",
+ " print(f\"Agent's action: {action}\")\n",
+ "\n",
+ " obs, reward, terminated, info = await env.step(action)\n",
+ "\n",
+ " if done or terminated:\n",
+ " print(f\"Task completed after {i + 1} steps\")\n",
+ " break\n",
+ " \n",
+ " except Exception as e:\n",
+ " print(f\"Error in step {i + 1}: {e}\")\n",
+ " break"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Evaluate Results"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "=== Final Evaluation ===\n",
+ "{'error': None,\n",
+ " 'gold_file_url': 'https://gahludmjcsmszgyufydt.supabase.co//storage/v1/object/public/sheetbench/615426c8-9df7-4ffa-92e9-200134a84da9/gold_solution_2.xlsx?',\n",
+ " 'logs': 'INFO: Starting evaluation with evaluator: sheets_cell_values\\n'\n",
+ " \"INFO: Evaluator args: [{'A1': 'ABC', 'B1': '-0.08'}]\\n\"\n",
+ " 'INFO: Partial rewarding: False\\n'\n",
+ " 'INFO: Starting sheets_cell_values evaluation for environment: '\n",
+ " 'af7a34a0-43b0-44d2-82d0-2b66ed16f1ea\\n'\n",
+ " \"INFO: Raw args received: [{'A1': 'ABC', 'B1': '-0.08'}] (type: \"\n",
+ " \")\\n\"\n",
+ " 'INFO: Partial rewarding enabled: False\\n'\n",
+ " 'INFO: === Google Sheets Cell Value Verification ===\\n'\n",
+ " 'INFO: Current page URL: '\n",
+ " 'https://docs.google.com/spreadsheets/d/1h-Ec3rW9sAME2sTn8qxIvFxO6qXtdURPacEFL5DJnqw/edit?gid=700326861#gid=700326861\\n'\n",
+ " 'INFO: ✅ Confirmed on Google Sheets page\\n'\n",
+ " 'INFO: Processing args parameter...\\n'\n",
+ " 'INFO: Args is a list with 1 items, extracting first item\\n'\n",
+ " \"INFO: Extracted: {'A1': 'ABC', 'B1': '-0.08'} (type: )\\n\"\n",
+ " 'INFO: Cell checks to perform: 2 cells\\n'\n",
+ " \"INFO: A1 -> expected: 'ABC'\\n\"\n",
+ " \"INFO: B1 -> expected: '-0.08'\\n\"\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " \"sheets_cell_values: Checking cells: {'A1': 'ABC', 'B1': '-0.08'}\\n\"\n",
+ " 'INFO: === ANSWER Sheet Navigation ===\\n'\n",
+ " 'INFO: Attempt 1/3: Attempting to find and navigate to ANSWER sheet '\n",
+ " 'tab...\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Attempt 1/3: Attempting to navigate to ANSWER '\n",
+ " 'sheet\\n'\n",
+ " 'INFO: Searching for ANSWER tab with selector: '\n",
+ " 'span.docs-sheet-tab-name:has-text(\"ANSWER\")\\n'\n",
+ " 'INFO: ANSWER tab search result (attempt 1): Found\\n'\n",
+ " 'INFO: ✅ Found ANSWER sheet tab on attempt 1, clicking on it...\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Found ANSWER sheet tab on attempt 1, clicking on '\n",
+ " 'it\\n'\n",
+ " 'ERROR: ❌ Error navigating to ANSWER sheet on attempt 1: '\n",
+ " 'Locator.click: Timeout 30000ms exceeded.\\n'\n",
+ " 'Call log:\\n'\n",
+ " ' - waiting for '\n",
+ " 'locator(\"span.docs-sheet-tab-name:has-text(\\\\\"ANSWER\\\\\")\")\\n'\n",
+ " ' - - locator resolved to ANSWER\\n'\n",
+ " ' - - attempting click action\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 20ms\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 100ms\\n'\n",
+ " ' - 35 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 500ms\\n'\n",
+ " '\\n'\n",
+ " 'WARNING: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Error navigating to ANSWER sheet on attempt 1: '\n",
+ " 'Locator.click: Timeout 30000ms exceeded.\\n'\n",
+ " 'Call log:\\n'\n",
+ " ' - waiting for '\n",
+ " 'locator(\"span.docs-sheet-tab-name:has-text(\\\\\"ANSWER\\\\\")\")\\n'\n",
+ " ' - - locator resolved to ANSWER\\n'\n",
+ " ' - - attempting click action\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 20ms\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 100ms\\n'\n",
+ " ' - 35 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 500ms\\n'\n",
+ " '\\n'\n",
+ " 'INFO: Waiting 500ms before retry 2...\\n'\n",
+ " 'INFO: Attempt 2/3: Attempting to find and navigate to ANSWER sheet '\n",
+ " 'tab...\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Attempt 2/3: Attempting to navigate to ANSWER '\n",
+ " 'sheet\\n'\n",
+ " 'INFO: Searching for ANSWER tab with selector: '\n",
+ " 'span.docs-sheet-tab-name:has-text(\"ANSWER\")\\n'\n",
+ " 'INFO: ANSWER tab search result (attempt 2): Found\\n'\n",
+ " 'INFO: ✅ Found ANSWER sheet tab on attempt 2, clicking on it...\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Found ANSWER sheet tab on attempt 2, clicking on '\n",
+ " 'it\\n'\n",
+ " 'ERROR: ❌ Error navigating to ANSWER sheet on attempt 2: '\n",
+ " 'Locator.click: Timeout 30000ms exceeded.\\n'\n",
+ " 'Call log:\\n'\n",
+ " ' - waiting for '\n",
+ " 'locator(\"span.docs-sheet-tab-name:has-text(\\\\\"ANSWER\\\\\")\")\\n'\n",
+ " ' - - locator resolved to ANSWER\\n'\n",
+ " ' - - attempting click action\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 20ms\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 100ms\\n'\n",
+ " ' - 35 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 500ms\\n'\n",
+ " '\\n'\n",
+ " 'WARNING: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Error navigating to ANSWER sheet on attempt 2: '\n",
+ " 'Locator.click: Timeout 30000ms exceeded.\\n'\n",
+ " 'Call log:\\n'\n",
+ " ' - waiting for '\n",
+ " 'locator(\"span.docs-sheet-tab-name:has-text(\\\\\"ANSWER\\\\\")\")\\n'\n",
+ " ' - - locator resolved to ANSWER\\n'\n",
+ " ' - - attempting click action\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 20ms\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 100ms\\n'\n",
+ " ' - 35 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 500ms\\n'\n",
+ " '\\n'\n",
+ " 'INFO: Waiting 500ms before retry 3...\\n'\n",
+ " 'INFO: Attempt 3/3: Attempting to find and navigate to ANSWER sheet '\n",
+ " 'tab...\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Attempt 3/3: Attempting to navigate to ANSWER '\n",
+ " 'sheet\\n'\n",
+ " 'INFO: Searching for ANSWER tab with selector: '\n",
+ " 'span.docs-sheet-tab-name:has-text(\"ANSWER\")\\n'\n",
+ " 'INFO: ANSWER tab search result (attempt 3): Found\\n'\n",
+ " 'INFO: ✅ Found ANSWER sheet tab on attempt 3, clicking on it...\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Found ANSWER sheet tab on attempt 3, clicking on '\n",
+ " 'it\\n'\n",
+ " 'ERROR: ❌ Error navigating to ANSWER sheet on attempt 3: '\n",
+ " 'Locator.click: Timeout 30000ms exceeded.\\n'\n",
+ " 'Call log:\\n'\n",
+ " ' - waiting for '\n",
+ " 'locator(\"span.docs-sheet-tab-name:has-text(\\\\\"ANSWER\\\\\")\")\\n'\n",
+ " ' - - locator resolved to ANSWER\\n'\n",
+ " ' - - attempting click action\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 20ms\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 100ms\\n'\n",
+ " ' - 35 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 500ms\\n'\n",
+ " '\\n'\n",
+ " 'WARNING: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Error navigating to ANSWER sheet on attempt 3: '\n",
+ " 'Locator.click: Timeout 30000ms exceeded.\\n'\n",
+ " 'Call log:\\n'\n",
+ " ' - waiting for '\n",
+ " 'locator(\"span.docs-sheet-tab-name:has-text(\\\\\"ANSWER\\\\\")\")\\n'\n",
+ " ' - - locator resolved to ANSWER\\n'\n",
+ " ' - - attempting click action\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 20ms\\n'\n",
+ " ' - 2 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 100ms\\n'\n",
+ " ' - 35 × waiting for element to be visible, enabled and stable\\n'\n",
+ " ' - - element is visible, enabled and stable\\n'\n",
+ " ' - - scrolling into view if needed\\n'\n",
+ " ' - - done scrolling\\n'\n",
+ " ' - - '\n",
+ " 'intercepts pointer events\\n'\n",
+ " ' - - retrying click action\\n'\n",
+ " ' - - waiting 500ms\\n'\n",
+ " '\\n'\n",
+ " 'WARNING: ⚠️ Failed to navigate to ANSWER sheet after 3 attempts, '\n",
+ " 'proceeding with current sheet\\n'\n",
+ " 'WARNING: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Failed to navigate to ANSWER sheet after 3 '\n",
+ " 'attempts, proceeding with current sheet\\n'\n",
+ " 'INFO: === File Content Extraction ===\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Granted read-write permissions\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Extracting page contents\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Selecting content\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Successfully extracted 157940 characters from '\n",
+ " 'file\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Found 5003 rows in content\\n'\n",
+ " 'INFO: Content extracted: 157940 characters\\n'\n",
+ " 'INFO: === Cell Content Parsing ===\\n'\n",
+ " 'INFO: Split file content into 5003 rows\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Found 5003 rows in content\\n'\n",
+ " 'INFO: First few rows of content:\\n'\n",
+ " \"INFO: Row 1: 'TradeDate | Ticker | ClosePrice | Volume | | '\\n\"\n",
+ " \"INFO: Row 2: '2023-01-02 | ABC | 476.87 | 2225355 | | '\\n\"\n",
+ " \"INFO: Row 3: '2023-01-02 | DEF | 322.21 | 3778582 | | '\\n\"\n",
+ " 'INFO: ... and 5000 more rows\\n'\n",
+ " 'INFO: === Cell Reference Parsing ===\\n'\n",
+ " \"INFO: Processing cell reference: 'A1' -> expected: 'ABC'\\n\"\n",
+ " \"INFO: Parsed 'A1' -> row=1 (0-indexed: 0), col=A (0-indexed: 0)\\n\"\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Parsed cell A1 as row=0, col=0\\n'\n",
+ " 'INFO: Row 1 exists in content\\n'\n",
+ " \"INFO: Row 1 has 6 columns: ['Col1', 'Col2', 'Col3', 'Col4', \"\n",
+ " \"'Col5', 'Col6']\\n\"\n",
+ " \"INFO: ✅ Found value for A1: 'TradeDate'\\n\"\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " \"sheets_cell_values: Found value for A1: 'TradeDate'\\n\"\n",
+ " \"INFO: Processing cell reference: 'B1' -> expected: '-0.08'\\n\"\n",
+ " \"INFO: Parsed 'B1' -> row=1 (0-indexed: 0), col=B (0-indexed: 1)\\n\"\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Parsed cell B1 as row=0, col=1\\n'\n",
+ " 'INFO: Row 1 exists in content\\n'\n",
+ " \"INFO: Row 1 has 6 columns: ['Col1', 'Col2', 'Col3', 'Col4', \"\n",
+ " \"'Col5', 'Col6']\\n\"\n",
+ " \"INFO: ✅ Found value for B1: 'Ticker'\\n\"\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " \"sheets_cell_values: Found value for B1: 'Ticker'\\n\"\n",
+ " 'INFO: === Cell Value Comparison ===\\n'\n",
+ " 'INFO: Comparing cell A1:\\n'\n",
+ " \"INFO: Expected: 'ABC' (type: )\\n\"\n",
+ " \"INFO: Actual: 'TradeDate' (type: )\\n\"\n",
+ " \"INFO: ❌ VALUE MISMATCH: 'TradeDate' != 'ABC'\\n\"\n",
+ " 'INFO: Comparing cell B1:\\n'\n",
+ " \"INFO: Expected: '-0.08' (type: )\\n\"\n",
+ " \"INFO: Actual: 'Ticker' (type: )\\n\"\n",
+ " \"INFO: ❌ VALUE MISMATCH: 'Ticker' != '-0.08'\\n\"\n",
+ " 'INFO: === Final Results ===\\n'\n",
+ " 'INFO: Cell comparison summary:\\n'\n",
+ " 'INFO: Total cells checked: 2\\n'\n",
+ " 'INFO: Matches: 0\\n'\n",
+ " 'INFO: Mismatches: 2\\n'\n",
+ " \"INFO: Failed cells: ['A1:', 'B1:']\\n\"\n",
+ " 'INFO: ❌ NOT all cells match expected values\\n'\n",
+ " 'INFO: Mismatches: [\"Cell A1: expected \\'ABC\\', got \\'TradeDate\\'\", '\n",
+ " '\"Cell B1: expected \\'-0.08\\', got \\'Ticker\\'\"]\\n'\n",
+ " 'INFO: [TASK af7a34a0-43b0-44d2-82d0-2b66ed16f1ea] '\n",
+ " 'sheets_cell_values: Mismatches found: [\"Cell A1: expected \\'ABC\\', '\n",
+ " 'got \\'TradeDate\\'\", \"Cell B1: expected \\'-0.08\\', got \\'Ticker\\'\"]\\n'\n",
+ " 'INFO: Final reward: 0.0\\n'\n",
+ " 'INFO: === Sheets Cell Values Evaluation Complete ===\\n'\n",
+ " 'INFO: Evaluation completed. Final reward: 0.0\\n',\n",
+ " 'reward': 0.0}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Evaluate environment state\n",
+ "result = await env.evaluate()\n",
+ "print(\"=== Final Evaluation ===\")\n",
+ "pprint(result)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Environment closed\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Clean up\n",
+ "await env.close()\n",
+ "print(\"Environment closed\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Run OSWorld-Verified in parallel"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v2/tasksets/OSWorld-Verified/tasks \"HTTP/1.1 200 OK\"\n",
+ "INFO:venv:Taskset OSWorld-Verified loaded successfully\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/jobs \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [0:12?:??, ?? steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:17?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:18?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:19?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:20?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/5a854981-aa94-433f-9381-2964f1117035/reset \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:21?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/69393c41-bcaa-4752-9a82-e3b105fae459/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/reset \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:22?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:27:44,394 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:23?:??, ?? steps/min]2025-08-11 15:27:45,034 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:27:45,687 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/reset \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:24?:??, ?? steps/min]2025-08-11 15:27:46,361 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:27:47,040 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:27:47,674 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 0/7340 [1:26?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/reset \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:27?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:27:49,362 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:27:49,997 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 0/7340 [1:29?:??, ?? steps/min]2025-08-11 15:27:50,669 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:27:51,350 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:27:52,040 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 0/7340 [1:31?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:27:53,361 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:27:53,361 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 0%|----------------------------------------| 0/7340 [1:32?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:27:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:27:54,692 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:27:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 0/7340 [1:33?:??, ?? steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 0%|----------------------------------------| 0/7340 [1:34?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:27:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 0%|----------------------------------------| 0/7340 [1:35?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 15:27:58,361 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:27:58,362 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ " 0%|----------------------------------------| 0/7340 [1:37?:??, ?? steps/min]2025-08-11 15:27:59,758 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 15:28:01,136 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:28:02,532 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:02,534 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:03,818 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:03,819 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 0%|----------------------------------------| 0/7340 [1:43?:??, ?? steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:28:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:05,760 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:05,761 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "2025-08-11 15:28:06,402 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:06,404 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 753})\n",
+ "2025-08-11 15:28:07,081 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:07,082 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 138, 'y': 691})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:28:08,470 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:08,472 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f'})\n",
+ " 0%|----------------------------------------| 3/7340 [1:47<4388:49, 1.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:09,761 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "2025-08-11 15:28:10,425 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:10,427 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 13})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:11,811 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:11,811 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ " 0%|----------------------------------------| 10/7340 [1:52<1368:26, 5.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:28:14,022 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 10/7340 [1:53<1383:09, 5.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 10/7340 [1:55<1407:46, 5.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:17,893 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:17,895 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 10/7340 [1:57<1430:29, 5.1 steps/min]2025-08-11 15:28:18,603 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:19,303 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 11/7340 [1:58<1316:13, 5.6 steps/min]2025-08-11 15:28:19,935 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:28:20,586 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 11/7340 [1:59<1330:17, 5.5 steps/min]2025-08-11 15:28:21,261 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:28:21,921 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 11/7340 [2:01<1345:02, 5.4 steps/min]2025-08-11 15:28:22,606 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:28:23,242 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:24,579 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ " 0%|----------------------------------------| 11/7340 [2:03<1374:31, 5.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cfefeec4-603f-4657-b0fe-7a641734693c/reset \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 12/7340 [2:05<1280:20, 5.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 12/7340 [2:06<1290:30, 5.7 steps/min]2025-08-11 15:28:28,314 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:28:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:29,612 - agent.ComputerAgent - INFO - Computer: type({'text': 'Settings'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:30,896 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 12/7340 [2:10<1324:04, 5.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:28:32,239 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 13/7340 [2:11<1234:48, 5.9 steps/min]\u001b[92m15:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:32,891 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 270, 'y': 162})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3e73a324-0510-4961-a718-2e2c15df1264/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/reset \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 13/7340 [2:13<1253:18, 5.8 steps/min]\u001b[92m15:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:35,507 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 427})\n",
+ "\u001b[92m15:28:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:36,202 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:28:37,541 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:28:38,211 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:28:38,212 - agent.ComputerAgent - INFO - Computer: click({'x': 324, 'y': 108})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 0%|----------------------------------------| 14/7340 [2:18<1204:22, 6.1 steps/min]\u001b[92m15:28:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:28:39,534 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 398})\n",
+ "\u001b[92m15:28:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:40,199 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 10})\n",
+ "\u001b[92m15:28:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 0%|----------------------------------------| 16/7340 [2:19<1063:33, 6.9 steps/min]2025-08-11 15:28:40,852 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 628})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:42,142 - agent.ComputerAgent - INFO - Agent: Fullscreen mode enabled in VLC so the video fills the screen. Task completed.\n",
+ "2025-08-11 15:28:42,801 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 472\n",
+ " - prompt_tokens: 2608\n",
+ " - total_tokens: 3080\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0058\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 19/7340 [2:22<916:31, 8.0 steps/min]\u001b[92m15:28:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:28:44,161 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:28:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:44,831 - agent.ComputerAgent - INFO - Computer: click({'x': 663, 'y': 357})\n",
+ " 0%|----------------------------------------| 20/7340 [2:24<878:35, 8.3 steps/min]2025-08-11 15:28:45,483 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:28:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:46,137 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:28:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 21/7340 [2:25<844:15, 8.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:47,289 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/invoke \"HTTP/1.1 200 OK\"\n",
+ " 0%|----------------------------------------| 21/7340 [2:26<850:58, 8.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 0%|----------------------------------------| 21/7340 [2:27<856:47, 8.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 39/7340 [2:28<463:19, 15.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:49,968 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:28:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8d2b35b-d513-4b1c-8d5c-6cd5afc98610/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 39/7340 [2:29<467:37, 15.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:51,295 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:28:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:28:51,945 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 39/7340 [2:31<471:40, 15.5 steps/min]2025-08-11 15:28:52,582 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:28:53,232 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 39/7340 [2:33<477:54, 15.3 steps/min]\u001b[92m15:28:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:28:54,611 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:28:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:28:55,300 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:28:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:28:56,614 - agent.ComputerAgent - INFO - Computer: type({'text': 'drip coffee maker'})\n",
+ " 1%|----------------------------------------| 39/7340 [2:35<486:08, 15.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]\u001b[92m15:28:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 40/7340 [2:36<476:58, 15.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]\u001b[92m15:28:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 40/7340 [2:38<483:34, 15.1 steps/min]\u001b[92m15:28:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]\n",
+ " 1%|----------------------------------------| 40/7340 [2:41<489:53, 14.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:02,751 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 40/7340 [2:42<492:59, 14.8 steps/min]\u001b[92m15:29:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:03,389 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 102, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 40/7340 [2:43<496:33, 14.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:04,673 - agent.ComputerAgent - INFO - Computer: click({'x': 388, 'y': 128})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:05,329 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 10})\n",
+ "\u001b[92m15:29:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 41/7340 [2:44<488:10, 15.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:05,975 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:29:05,976 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 211, 'y': 211})\n",
+ "2025-08-11 15:29:06,647 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 314, 'y': 130})\n",
+ "\u001b[92m15:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 43/7340 [2:45<469:19, 15.5 steps/min]2025-08-11 15:29:07,321 - agent.ComputerAgent - INFO - Computer: click({'x': 1013, 'y': 62})\n",
+ "2025-08-11 15:29:07,966 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:29:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 46/7340 [2:49<447:09, 16.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 46/7340 [2:50<450:25, 16.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:12,384 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:29:12,385 - agent.ComputerAgent - INFO - Computer: click({'x': 472, 'y': 635})\n",
+ " 1%|----------------------------------------| 46/7340 [2:51<453:26, 16.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:13,573 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:29:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 47/7340 [2:53<448:39, 16.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:14,932 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:15,609 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,', 'x': 548, 'y': 74})\n",
+ " 1%|----------------------------------------| 48/7340 [2:54<442:36, 16.5 steps/min]2025-08-11 15:29:16,273 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:29:16,952 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 48/7340 [2:56<446:03, 16.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:18,286 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 48/7340 [2:57<449:22, 16.2 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:29:19,343 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "\u001b[92m15:29:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 49/7340 [2:59<444:36, 16.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:21,400 - agent.ComputerAgent - INFO - Computer: click({'x': 195, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 195, 'y': 321})\n",
+ "\u001b[92m15:29:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:22,044 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:29:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 49/7340 [3:01<451:06, 16.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:23,361 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:29:23,362 - agent.ComputerAgent - INFO - Computer: click({'x': 778, 'y': 497})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 778, 'y': 497})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 51/7340 [3:03<436:43, 16.7 steps/min]\u001b[92m15:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:24,674 - agent.ComputerAgent - INFO - Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 842, 'y': 571})\n",
+ "\u001b[92m15:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:26,023 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "2025-08-11 15:29:26,705 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 439})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 749, 'y': 439})\n",
+ " 1%|----------------------------------------| 52/7340 [3:05<434:15, 16.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:28,019 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 1%|----------------------------------------| 54/7340 [3:07<421:11, 17.3 steps/min]2025-08-11 15:29:28,685 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:29,374 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 81})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 542, 'y': 81})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 54/7340 [3:08<424:04, 17.2 steps/min]2025-08-11 15:29:30,028 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m15:29:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:29:30,682 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:29:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 55/7340 [3:09<419:13, 17.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 1%|----------------------------------------| 55/7340 [3:10<421:28, 17.3 steps/min]2025-08-11 15:29:31,985 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:29:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:32,637 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 234, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 234, 'y': 149})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:29:33,297 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:29:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 56/7340 [3:12<417:20, 17.5 steps/min]2025-08-11 15:29:33,958 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:29:34,596 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 57/7340 [3:13<412:43, 17.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:29:35,253 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:35,931 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m15:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 57/7340 [3:15<415:35, 17.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:37,092 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 57/7340 [3:16<418:03, 17.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 58/7340 [3:18<415:02, 17.5 steps/min]\u001b[92m15:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:39,702 - agent.ComputerAgent - INFO - Computer: click({'x': 550, 'y': 250})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 550, 'y': 250})\n",
+ "\u001b[92m15:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:40,396 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 333})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 333})\n",
+ "\u001b[92m15:29:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 58/7340 [3:19<417:39, 17.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:41,062 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 129, 'y': 554})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 129, 'y': 554})\n",
+ "2025-08-11 15:29:41,683 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:29:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:29:43,043 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ " 1%|----------------------------------------| 60/7340 [3:22<408:57, 17.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:44,404 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:46,393 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:29:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 62/7340 [3:26<403:42, 18.0 steps/min]2025-08-11 15:29:47,924 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:29:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:29:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:29:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:49,260 - agent.ComputerAgent - INFO - Computer: double_click({'x': 538, 'y': 81})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 538, 'y': 81})\n",
+ "\u001b[92m15:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 62/7340 [3:28<407:49, 17.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:50,192 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 81, 'y': 10})\n",
+ "2025-08-11 15:29:50,877 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 749, 'y': 440})\n",
+ "\u001b[92m15:29:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:29:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:52,427 - agent.ComputerAgent - INFO - Computer: click({'x': 690, 'y': 467})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 690, 'y': 467})\n",
+ " 1%|----------------------------------------| 64/7340 [3:31<400:59, 18.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:29:53,199 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:29:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:29:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:29:53,837 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 736})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 736})\n",
+ " 1%|----------------------------------------| 67/7340 [3:33<385:26, 18.9 steps/min]2025-08-11 15:29:54,494 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:29:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 68/7340 [3:34<381:32, 19.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:29:56,348 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 68/7340 [3:35<384:10, 18.9 steps/min]2025-08-11 15:29:56,995 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:29:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:29:57,663 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 1%|----------------------------------------| 69/7340 [3:39<385:02, 18.9 steps/min]2025-08-11 15:30:01,052 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 197})\n",
+ "2025-08-11 15:30:04,387 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 256, 'y': 237})\n",
+ "2025-08-11 15:30:07,662 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f9'})\n",
+ " 1%|----------------------------------------| 71/7340 [3:46<387:05, 18.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:30:12,728 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k'})\n",
+ "2025-08-11 15:30:14,082 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ " 1%|----------------------------------------| 74/7340 [3:53<381:45, 19.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:30:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:30:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:30:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:30:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:16,434 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:30:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:17,090 - agent.ComputerAgent - INFO - Computer: click({'x': 577, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 577, 'y': 429})\n",
+ "2025-08-11 15:30:17,779 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 102, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 102, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:30:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:19,090 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ " 1%|----------------------------------------| 74/7340 [3:58<389:59, 18.6 steps/min]\u001b[92m15:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:20,424 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 570, 'scroll_x': 0, 'x': 91, 'y': 192})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 570, 'scroll_x': 0, 'x': 91, 'y': 192})\n",
+ "2025-08-11 15:30:21,088 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 50})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 543, 'y': 50})\n",
+ "\u001b[92m15:30:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:30:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:22,401 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 375})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 306, 'y': 375})\n",
+ " 1%|----------------------------------------| 76/7340 [4:01<384:52, 18.9 steps/min]\u001b[92m15:30:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:30:23,069 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ "\u001b[92m15:30:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:23,731 - agent.ComputerAgent - INFO - Computer: double_click({'x': 333, 'y': 88})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 333, 'y': 88})\n",
+ " 1%|----------------------------------------| 79/7340 [4:02<372:08, 19.5 steps/min]2025-08-11 15:30:24,382 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:30:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:25,034 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 81/7340 [4:04<364:51, 19.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:30:26,403 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:30:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:30:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 81/7340 [4:06<367:49, 19.7 steps/min]\u001b[92m15:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:28,082 - agent.ComputerAgent - INFO - Computer: click({'x': 453, 'y': 336})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 453, 'y': 336})\n",
+ " 1%|----------------------------------------| 81/7340 [4:07<369:20, 19.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:30:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:30:40,629 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 112})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 112})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:30:41,985 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_current_url({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:30:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 83/7340 [4:22<382:38, 19.0 steps/min]2025-08-11 15:30:43,946 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:30:45,231 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+k ctrl+s'})\n",
+ "2025-08-11 15:30:45,906 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:30:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:30:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:30:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 84/7340 [4:25<381:45, 19.0 steps/min]2025-08-11 15:30:46,597 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:30:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:47,279 - agent.ComputerAgent - INFO - Computer: click({'x': 121, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 121, 'y': 89})\n",
+ "2025-08-11 15:30:47,939 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 331, 'y': 268})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 331, 'y': 268})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 1%|----------------------------------------| 85/7340 [4:27<380:05, 19.1 steps/min]2025-08-11 15:30:48,574 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:30:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:49,255 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:30:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 87/7340 [4:28<373:01, 19.4 steps/min]2025-08-11 15:30:49,904 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:30:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:30:50,606 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:30:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:51,265 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:30:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 87/7340 [4:30<375:52, 19.3 steps/min]2025-08-11 15:30:51,955 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:30:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:52,647 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:30:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 87/7340 [4:31<377:45, 19.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:30:53,344 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:30:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:54,385 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:30:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 87/7340 [4:33<380:10, 19.1 steps/min]2025-08-11 15:30:55,417 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:30:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 88/7340 [4:34<377:14, 19.2 steps/min]2025-08-11 15:30:56,064 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:30:56,694 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 88/7340 [4:35<378:58, 19.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:30:57,826 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ " 1%|----------------------------------------| 88/7340 [4:37<380:29, 19.1 steps/min]\u001b[92m15:30:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 88/7340 [4:38<381:54, 19.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 1%|----------------------------------------| 89/7340 [4:39<378:57, 19.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:01,029 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:31:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 89/7340 [4:40<380:32, 19.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 89/7340 [4:41<382:11, 19.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:03,414 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 562})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 398, 'y': 562})\n",
+ " 1%|----------------------------------------| 90/7340 [4:42<379:25, 19.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:04,080 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:31:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 91/7340 [4:43<376:33, 19.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 1%|----------------------------------------| 92/7340 [4:45<375:20, 19.3 steps/min]\u001b[92m15:31:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:07,946 - agent.ComputerAgent - INFO - Computer: click({'x': 628, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 628, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:09,285 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 236})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 236})\n",
+ " 1%|----------------------------------------| 92/7340 [4:48<378:47, 19.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:09,952 - agent.ComputerAgent - INFO - Computer: click({'x': 307, 'y': 375})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 307, 'y': 375})\n",
+ "\u001b[92m15:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:10,581 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "2025-08-11 15:31:11,261 - agent.ComputerAgent - INFO - Computer: click({'x': 361, 'y': 140})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 361, 'y': 140})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:13,265 - agent.ComputerAgent - INFO - Agent: I located the Driver License Eligibility Requirements page on the Virginia DMV site.\n",
+ "\n",
+ "Page title: Driver License Eligibility Requirements\n",
+ "Direct URL: https://www.dmv.virginia.gov/licenses-ids/license/applying/eligibility\n",
+ "\n",
+ "This page covers:\n",
+ "- New residents\n",
+ "- Military personnel\n",
+ "- Out-of-state students\n",
+ "- Non-residents\n",
+ "- Identification and residency requirements\n",
+ "- First time drivers\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I located the Driver License Eligibility Requirements page on the Virginia DMV site.\n",
+ "\n",
+ "Page title: Driver License Eligibility Requirements\n",
+ "Direct URL: https://www.dmv.virginia.gov/licenses-ids/license/applying/eligibility\n",
+ "\n",
+ "This page covers:\n",
+ "- New residents\n",
+ "- Military personnel\n",
+ "- Out-of-state students\n",
+ "- Non-residents\n",
+ "- Identification and residency requirements\n",
+ "- First time drivers\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 15:31:13,908 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 730\n",
+ " - prompt_tokens: 5320\n",
+ " - total_tokens: 6050\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4608\n",
+ " - response_cost: $0.0088\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 730\n",
+ " - prompt_tokens: 5320\n",
+ " - total_tokens: 6050\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4608\n",
+ " - response_cost: $0.0088\n",
+ " 1%|----------------------------------------| 95/7340 [4:53<372:33, 19.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:15,850 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 137})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 137})\n",
+ " 1%|----------------------------------------| 98/7340 [4:55<363:23, 19.9 steps/min]\u001b[92m15:31:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:16,503 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 121, 'y': 554})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 121, 'y': 554})\n",
+ "\u001b[92m15:31:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:17,144 - agent.ComputerAgent - INFO - Computer: click({'x': 849, 'y': 80})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 849, 'y': 80})\n",
+ "2025-08-11 15:31:17,767 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:31:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 99/7340 [4:57<362:52, 20.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:19,099 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:31:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:31:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:19,763 - agent.ComputerAgent - INFO - Computer: click({'x': 376, 'y': 623})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 376, 'y': 623})\n",
+ " 1%|----------------------------------------| 101/7340 [4:58<357:08, 20.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 1%|----------------------------------------| 103/7340 [4:59<351:16, 20.6 steps/min]\u001b[92m15:31:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:21,556 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 330, 'y': 357})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 330, 'y': 357})\n",
+ " 1%|----------------------------------------| 103/7340 [5:00<352:27, 20.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:22,725 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:31:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:24,120 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:31:24,797 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 104/7340 [5:04<352:31, 20.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:25,478 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:31:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:31:26,128 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 105/7340 [5:05<350:39, 20.6 steps/min]2025-08-11 15:31:26,781 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:31:27,455 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:31:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 105/7340 [5:06<352:11, 20.5 steps/min]2025-08-11 15:31:28,108 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:31:28,788 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 105/7340 [5:08<353:45, 20.5 steps/min]2025-08-11 15:31:29,427 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:31:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:31:30,108 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:31:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ " 1%|----------------------------------------| 105/7340 [5:09<355:59, 20.3 steps/min]2025-08-11 15:31:31,455 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:31:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:31:32,135 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:31:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:32,778 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:31:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:31:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 105/7340 [5:12<358:23, 20.2 steps/min]2025-08-11 15:31:33,442 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:31:33,443 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ "2025-08-11 15:31:34,112 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:31:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 105/7340 [5:13<359:51, 20.1 steps/min]2025-08-11 15:31:35,159 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 1%|----------------------------------------| 106/7340 [5:14<357:34, 20.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 106/7340 [5:15<359:06, 20.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:37,160 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:31:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:31:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:37,852 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:31:37,852 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 1%|----------------------------------------| 107/7340 [5:17<357:57, 20.2 steps/min]\u001b[92m15:31:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:39,152 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:31:39,152 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 124/7340 [5:19<309:28, 23.3 steps/min]\u001b[92m15:31:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:40,497 - agent.ComputerAgent - INFO - Computer: click({'x': 270, 'y': 622})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 270, 'y': 622})\n",
+ "\u001b[92m15:31:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:31:41,753 - agent.ComputerAgent - INFO - Computer: click({'x': 623, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 623, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 125/7340 [5:21<309:25, 23.3 steps/min]\u001b[92m15:31:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:43,786 - agent.ComputerAgent - INFO - Computer: click({'x': 385, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 385, 'y': 116})\n",
+ "\u001b[92m15:31:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 127/7340 [5:22<305:43, 23.6 steps/min]2025-08-11 15:31:44,445 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 631, 'scroll_x': 0, 'x': 91, 'y': 463})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 631, 'scroll_x': 0, 'x': 91, 'y': 463})\n",
+ "2025-08-11 15:31:45,087 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 128/7340 [5:24<304:35, 23.7 steps/min]2025-08-11 15:31:45,726 - agent.ComputerAgent - INFO - Computer: click({'x': 367, 'y': 596})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 367, 'y': 596})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e73a324-0510-4961-a718-2e2c15df1264/close \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 129/7340 [5:25<303:23, 23.8 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 130/7340 [5:27<302:33, 23.8 steps/min]\u001b[92m15:31:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 130/7340 [5:28<303:49, 23.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:31:50,635 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it].6 steps/min]2025-08-11 15:31:51,555 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 130/7340 [5:30<305:49, 23.6 steps/min]2025-08-11 15:31:52,212 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:31:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]\u001b[92m15:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 130/7340 [5:33<308:37, 23.4 steps/min]\u001b[92m15:31:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 15:31:55,976 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:31:55,977 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:31:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 2%|----------------------------------------| 130/7340 [5:37<311:43, 23.1 steps/min]\u001b[92m15:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:58,690 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 341, 'y': 493})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:31:59,334 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 480})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:00,031 - agent.ComputerAgent - INFO - Computer: double_click({'x': 412, 'y': 91})\n",
+ "\u001b[92m15:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        "  2%|----------------------------------------| 131/7340 [5:39<311:08, 23.2 steps/min]\n",
+ "\u001b[92m15:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:00,693 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 85, 'y': 149})\n",
+ "2025-08-11 15:32:01,366 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "2025-08-11 15:32:02,040 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 166})\n",
+ "2025-08-11 15:32:02,732 - agent.ComputerAgent - INFO - Computer: click({'x': 487, 'y': 375})\n",
+ "\u001b[92m15:32:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:32:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 134/7340 [5:42<307:09, 23.5 steps/min]2025-08-11 15:32:04,028 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:32:04,029 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 524})\n",
+ "2025-08-11 15:32:04,664 - agent.ComputerAgent - INFO - Computer: click({'x': 97, 'y': 211})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:05,348 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 139/7340 [5:44<297:31, 24.2 steps/min]\u001b[92m15:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:06,008 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:32:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:06,659 - agent.ComputerAgent - INFO - Computer: click({'x': 362, 'y': 164})\n",
+ " 2%|----------------------------------------| 141/7340 [5:45<294:20, 24.5 steps/min]2025-08-11 15:32:07,300 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:07,988 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 142/7340 [5:47<293:21, 24.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/631d1f95-d2aa-4a54-be50-0bf3e09a5233/close \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 142/7340 [5:49<295:13, 24.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 142/7340 [5:50<296:04, 24.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 142/7340 [5:51<296:56, 24.2 steps/min]2025-08-11 15:32:12,946 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 142/7340 [5:52<298:04, 24.1 steps/min]2025-08-11 15:32:14,321 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:32:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:14,989 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:32:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:32:15,650 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:32:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 142/7340 [5:57<301:41, 23.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]\u001b[92m15:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]\u001b[92m15:32:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        "  2%|----------------------------------------| 142/7340 [5:59<303:43, 23.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "\u001b[92m15:32:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:21,662 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:32:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 142/7340 [6:00<304:54, 23.6 steps/min]2025-08-11 15:32:22,340 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 2%|----------------------------------------| 142/7340 [6:02<306:33, 23.5 steps/min]\u001b[92m15:32:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:24,275 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:24,944 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:25,629 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "\u001b[92m15:32:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 142/7340 [6:04<308:13, 23.4 steps/min]2025-08-11 15:32:26,328 - agent.ComputerAgent - INFO - Computer: click({'x': 715, 'y': 189})\n",
+ "2025-08-11 15:32:26,997 - agent.ComputerAgent - INFO - Computer: click({'x': 155, 'y': 554})\n",
+ "\u001b[92m15:32:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:27,655 - agent.ComputerAgent - INFO - Computer: click({'x': 284, 'y': 155})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:32:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 143/7340 [6:07<308:20, 23.3 steps/min]2025-08-11 15:32:29,009 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 60})\n",
+ "2025-08-11 15:32:29,664 - agent.ComputerAgent - INFO - Computer: click({'x': 121, 'y': 89})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:31,009 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:32:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:31,659 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 146/7340 [6:10<304:37, 23.6 steps/min]\u001b[92m15:32:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:32,330 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 166})\n",
+ "2025-08-11 15:32:33,028 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m15:32:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:32:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 148/7340 [6:12<301:32, 23.9 steps/min]2025-08-11 15:32:33,747 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 546})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:32:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 149/7340 [6:14<301:04, 23.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:32:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:36,235 - agent.ComputerAgent - INFO - Computer: click({'x': 426, 'y': 659})\n",
+ " 2%|----------------------------------------| 150/7340 [6:15<299:55, 24.0 steps/min]\u001b[92m15:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:36,937 - agent.ComputerAgent - INFO - Computer: click({'x': 442, 'y': 495})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:32:38,887 - agent.ComputerAgent - INFO - Computer: type({'text': '100'})\n",
+ " 2%|----------------------------------------| 151/7340 [6:18<300:00, 24.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:32:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:40,049 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 153/7340 [6:19<296:54, 24.2 steps/min]2025-08-11 15:32:40,728 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:32:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:32:41,399 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:42,071 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:32:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:42,720 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:32:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 154/7340 [6:21<297:02, 24.2 steps/min]2025-08-11 15:32:43,391 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:32:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:44,031 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:32:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:32:45,376 - agent.ComputerAgent - INFO - Computer: type({'text': 'splash'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 154/7340 [6:24<299:05, 24.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:46,648 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:47,317 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 155/7340 [6:26<298:39, 24.1 steps/min]\u001b[92m15:32:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:49,032 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\nlibreoffice --headless --convert-to csv file1.xlsx\\nlibreoffice --headless --convert-to csv file2.ods\\ncat file1.csv file2.csv > output.csv\\nlibreoffice --calc output.csv\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/Desktop\\nlibreoffice --headless --convert-to csv file1.xlsx\\nlibreoffice --headless --convert-to csv file2.ods\\ncat file1.csv file2.csv > output.csv\\nlibreoffice --calc output.csv\\n'})\n",
+ "2025-08-11 15:32:49,711 - agent.ComputerAgent - INFO - Computer: click({'x': 694, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 694, 'y': 248})\n",
+ " 2%|----------------------------------------| 155/7340 [6:28<300:27, 23.9 steps/min]2025-08-11 15:32:50,329 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:32:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:51,007 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 157/7340 [6:30<297:33, 24.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:51,672 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:32:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 157/7340 [6:32<299:06, 24.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:32:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:32:54,187 - agent.ComputerAgent - INFO - Computer: click({'x': 463, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 463, 'y': 136})\n",
+ " 2%|----------------------------------------| 157/7340 [6:33<299:58, 23.9 steps/min]\u001b[92m15:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:32:54,845 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 103, 'y': 380})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 103, 'y': 380})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 158/7340 [6:34<298:46, 24.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:32:55,978 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:32:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 159/7340 [6:35<297:36, 24.1 steps/min]2025-08-11 15:32:56,643 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:32:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:32:57,305 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:32:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 159/7340 [6:36<298:28, 24.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 159/7340 [6:37<299:14, 24.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:32:59,978 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:33:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 159/7340 [6:39<300:32, 23.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:01,288 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 159/7340 [6:41<302:27, 23.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:03,240 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:33:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:33:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:03,930 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 390, 'y': 345})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 390, 'y': 345})\n",
+ " 2%|----------------------------------------| 159/7340 [6:43<303:26, 23.7 steps/min]\u001b[92m15:33:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:04,616 - agent.ComputerAgent - INFO - Computer: click({'x': 101, 'y': 295})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 101, 'y': 295})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:06,000 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 15:33:06,671 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 53})\n",
+ " 2%|----------------------------------------| 160/7340 [6:45<303:33, 23.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:08,650 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:33:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 162/7340 [6:48<301:44, 23.8 steps/min]\u001b[92m15:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:09,987 - agent.ComputerAgent - INFO - Computer: click({'x': 812, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 812, 'y': 189})\n",
+ "\u001b[92m15:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:11,307 - agent.ComputerAgent - INFO - Computer: click({'x': 102, 'y': 238})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 102, 'y': 238})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:12,659 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 162/7340 [6:52<304:40, 23.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:14,031 - agent.ComputerAgent - INFO - Computer: click({'x': 309, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 309, 'y': 116})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:15,326 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "2025-08-11 15:33:15,962 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:33:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 164/7340 [6:55<303:17, 23.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:17,260 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 652, 'y': 139})\n",
+ "2025-08-11 15:33:17,929 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 212, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:33:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:20,602 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 2%|----------------------------------------| 166/7340 [6:59<302:22, 23.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:21,282 - agent.ComputerAgent - INFO - Computer: click({'x': 371, 'y': 624})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 371, 'y': 624})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:22,607 - agent.ComputerAgent - INFO - Computer: type({'text': '100'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '100'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:23,270 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ " 2%|----------------------------------------| 168/7340 [7:02<300:36, 23.9 steps/min]\u001b[92m15:33:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:33:23,919 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ "2025-08-11 15:33:24,594 - agent.ComputerAgent - INFO - Computer: double_click({'x': 473, 'y': 93})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 473, 'y': 93})\n",
+ " 2%|----------------------------------------| 170/7340 [7:03<297:55, 24.1 steps/min]2025-08-11 15:33:25,220 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:33:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:25,919 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:33:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 172/7340 [7:05<295:18, 24.3 steps/min]2025-08-11 15:33:26,562 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:33:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:27,860 - agent.ComputerAgent - INFO - Computer: type({'text': 'edited_colorful.png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'edited_colorful.png'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 172/7340 [7:07<297:07, 24.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:33:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:29,689 - agent.ComputerAgent - INFO - Computer: click({'x': 693, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 693, 'y': 130})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 173/7340 [7:08<296:07, 24.2 steps/min]2025-08-11 15:33:30,343 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:33:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:31,382 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:33:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 174/7340 [7:10<295:35, 24.2 steps/min]2025-08-11 15:33:32,020 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:33:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:32,699 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:33:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 174/7340 [7:11<296:28, 24.2 steps/min]2025-08-11 15:33:33,362 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:33:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:34,058 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 174/7340 [7:13<297:25, 24.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:34,700 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:33:35,380 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:33:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 174/7340 [7:14<298:20, 24.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:36,059 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:33:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 2%|----------------------------------------| 174/7340 [7:16<299:43, 23.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:38,043 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:39,360 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 285})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 285})\n",
+ "\u001b[92m15:33:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 174/7340 [7:18<301:01, 23.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:40,000 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:40,660 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 139})\n",
+ "\u001b[92m15:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:42,006 - agent.ComputerAgent - INFO - Computer: type({'text': 'clear cookies on exit'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'clear cookies on exit'})\n",
+ " 2%|----------------------------------------| 175/7340 [7:21<301:04, 23.8 steps/min]2025-08-11 15:33:42,645 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:33:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:33:43,335 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 46, 'y': 166}, {'x': 386, 'y': 356}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 46, 'y': 166}, {'x': 386, 'y': 356}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 177/7340 [7:23<298:59, 24.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:33:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:33:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:45,327 - agent.ComputerAgent - INFO - Computer: double_click({'x': 244, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 244, 'y': 155})\n",
+ "\u001b[92m15:33:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:46,591 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:33:46,592 - agent.ComputerAgent - INFO - Agent: Opening the desktop and launching GIMP to convert the image to SVG.\n",
+ "INFO:agent.ComputerAgent:Agent: Opening the desktop and launching GIMP to convert the image to SVG.\n",
+ "2025-08-11 15:33:46,593 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 2%|----------------------------------------| 178/7340 [7:26<299:25, 23.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:47,979 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -525, 'scroll_x': 0, 'x': 126, 'y': 419})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -525, 'scroll_x': 0, 'x': 126, 'y': 419})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:48,644 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:33:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 2%|----------------------------------------| 180/7340 [7:27<296:56, 24.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:33:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:49,323 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 81})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 249, 'y': 81})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:51,332 - agent.ComputerAgent - INFO - Computer: type({'text': 'focus editor'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'focus editor'})\n",
+ " 2%|----------------------------------------| 181/7340 [7:30<296:59, 24.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:33:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:33:52,512 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 74})\n",
+ " 2%|----------------------------------------| 183/7340 [7:31<294:26, 24.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:33:53,159 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m15:33:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:33:53,831 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:33:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:33:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 184/7340 [7:33<294:07, 24.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:33:55,575 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 184/7340 [7:34<294:47, 24.3 steps/min]2025-08-11 15:33:56,223 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 336, 'y': 493})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 336, 'y': 493})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:56,861 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:33:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "ERROR:asyncio:Unclosed client session\n",
+ "client_session: \n",
+ " 3%|█---------------------------------------| 184/7340 [7:36<295:51, 24.2 steps/min]2025-08-11 15:33:58,012 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:33:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:33:58,652 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:33:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 185/7340 [7:37<295:08, 24.2 steps/min]2025-08-11 15:33:59,334 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:33:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:33:59,993 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 185/7340 [7:39<296:01, 24.2 steps/min]2025-08-11 15:34:01,015 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 185/7340 [7:40<297:07, 24.1 steps/min]\u001b[92m15:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:02,373 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:34:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:34:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:03,423 - agent.ComputerAgent - INFO - Computer: click({'x': 692, 'y': 624})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 692, 'y': 624})\n",
+ " 3%|█---------------------------------------| 185/7340 [7:42<298:12, 24.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 186/7340 [7:44<297:45, 24.0 steps/min]\u001b[92m15:34:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:07,220 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:08,609 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 550, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 550, 'y': 627})\n",
+ "\u001b[92m15:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 186/7340 [7:49<300:46, 23.8 steps/min]\u001b[92m15:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:10,587 - agent.ComputerAgent - INFO - Computer: click({'x': 515, 'y': 457})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 515, 'y': 457})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:11,253 - agent.ComputerAgent - INFO - Computer: click({'x': 905, 'y': 50})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 905, 'y': 50})\n",
+ "\u001b[92m15:34:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:11,905 - agent.ComputerAgent - INFO - Computer: click({'x': 476, 'y': 169})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 476, 'y': 169})\n",
+ " 3%|█---------------------------------------| 188/7340 [7:51<298:42, 23.9 steps/min]2025-08-11 15:34:12,560 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': -659, 'x': 18, 'y': 13})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_x': 0, 'scroll_y': -659, 'x': 18, 'y': 13})\n",
+ "\u001b[92m15:34:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:13,202 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 44})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 44})\n",
+ " 3%|█---------------------------------------| 191/7340 [7:52<294:43, 24.3 steps/min]2025-08-11 15:34:13,860 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "2025-08-11 15:34:14,514 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:34:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 193/7340 [7:53<292:23, 24.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 193/7340 [7:55<293:37, 24.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:17,714 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -575, 'scroll_x': 0, 'x': 90, 'y': 194})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -575, 'scroll_x': 0, 'x': 90, 'y': 194})\n",
+ " 3%|█---------------------------------------| 193/7340 [7:56<294:20, 24.3 steps/min]\u001b[92m15:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:18,367 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 120, 'y': 53})\n",
+ "\u001b[92m15:34:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:34:19,040 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 194/7340 [7:58<293:36, 24.3 steps/min]2025-08-11 15:34:19,683 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:34:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:34:20,361 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 196/7340 [7:59<291:21, 24.5 steps/min]2025-08-11 15:34:21,003 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:34:21,660 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 196/7340 [8:01<292:33, 24.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:23,387 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 196/7340 [8:02<293:16, 24.4 steps/min]2025-08-11 15:34:24,065 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ "2025-08-11 15:34:24,731 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:34:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:34:25,787 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:34:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 197/7340 [8:06<294:10, 24.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:28,860 - agent.ComputerAgent - INFO - Computer: type({'text': ' active editor group'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' active editor group'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 197/7340 [8:08<294:56, 24.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:29,500 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:34:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:34:30,194 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 427})\n",
+ "2025-08-11 15:34:30,876 - agent.ComputerAgent - INFO - Computer: click({'x': 904, 'y': 558})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 904, 'y': 558})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 198/7340 [8:11<295:24, 24.2 steps/min]\u001b[92m15:34:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:32,811 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:34:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:34,147 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m15:34:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 200/7340 [8:13<293:33, 24.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:34,769 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 53})\n",
+ "2025-08-11 15:34:35,412 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:34:35,413 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 428})\n",
+ "2025-08-11 15:34:36,077 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 200/7340 [8:16<295:30, 24.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:38,113 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:34:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:34:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:40,185 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m15:34:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:34:40,912 - agent.ComputerAgent - INFO - Computer: move({'x': 230, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 230, 'y': 128})\n",
+ "\u001b[92m15:34:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 202/7340 [8:20<294:32, 24.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:41,633 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 361, 'y': 549})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 361, 'y': 549})\n",
+ "\u001b[92m15:34:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:42,290 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 184})\n",
+ "\u001b[92m15:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 203/7340 [8:21<293:50, 24.3 steps/min]2025-08-11 15:34:42,946 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 382})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 382})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:34:43,638 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:34:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 205/7340 [8:23<292:06, 24.4 steps/min]\u001b[92m15:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:44,985 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:34:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:34:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:45,658 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 332, 'y': 92})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 206/7340 [8:24<291:23, 24.5 steps/min]2025-08-11 15:34:46,281 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:34:47,314 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 207/7340 [8:26<290:55, 24.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:34:47,977 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:34:48,632 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:34:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 207/7340 [8:28<292:01, 24.4 steps/min]2025-08-11 15:34:49,913 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:34:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:34:50,584 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:34:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:51,263 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:34:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 207/7340 [8:31<293:58, 24.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:34:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:53,944 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 148, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 148, 'y': 105})\n",
+ "\u001b[92m15:34:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 207/7340 [8:33<295:06, 24.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:55,256 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 477})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 477})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:34:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:34:57,289 - agent.ComputerAgent - INFO - Computer: type({'text': '100'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '100'})\n",
+ "2025-08-11 15:34:57,983 - agent.ComputerAgent - INFO - Computer: click({'x': 462, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 462, 'y': 133})\n",
+ "\u001b[92m15:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 208/7340 [8:37<295:33, 24.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:34:58,660 - agent.ComputerAgent - INFO - Computer: click({'x': 308, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 308, 'y': 116})\n",
+ "2025-08-11 15:34:59,285 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:34:59,286 - agent.ComputerAgent - INFO - Computer: click({'x': 387, 'y': 158})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 387, 'y': 158})\n",
+ "\u001b[92m15:34:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:35:00,674 - agent.ComputerAgent - INFO - Computer: click({'x': 640, 'y': 436, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 640, 'y': 436, 'button': 'left'})\n",
+ " 3%|█---------------------------------------| 211/7340 [8:39<292:45, 24.4 steps/min]2025-08-11 15:35:01,337 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 420, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 214/7340 [8:41<289:39, 24.6 steps/min]\u001b[92m15:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:03,280 - agent.ComputerAgent - INFO - Computer: double_click({'x': 213, 'y': 117})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 213, 'y': 117})\n",
+ "\u001b[92m15:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:03,948 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 215/7340 [8:43<289:17, 24.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:05,212 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:35:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:35:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:05,863 - agent.ComputerAgent - INFO - Computer: click({'x': 610, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 610, 'y': 60})\n",
+ " 3%|█---------------------------------------| 217/7340 [8:45<287:15, 24.8 steps/min]2025-08-11 15:35:06,527 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:35:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:07,204 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:35:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 218/7340 [8:47<287:00, 24.8 steps/min]\u001b[92m15:35:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:09,599 - agent.ComputerAgent - INFO - Computer: click({'x': 385, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 385, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 218/7340 [8:49<288:18, 24.7 steps/min]2025-08-11 15:35:10,889 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 123})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 123})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:35:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 219/7340 [8:50<287:40, 24.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:12,208 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -496, 'scroll_x': 0, 'x': 90, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -496, 'scroll_x': 0, 'x': 90, 'y': 219})\n",
+ "\u001b[92m15:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:12,847 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:13,519 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 141})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 141})\n",
+ "2025-08-11 15:35:14,161 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:35:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:14,807 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:35:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 220/7340 [8:54<288:03, 24.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:35:15,833 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:35:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 222/7340 [8:55<285:57, 24.9 steps/min]2025-08-11 15:35:16,495 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:35:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:17,131 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:35:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 222/7340 [8:56<286:37, 24.8 steps/min]2025-08-11 15:35:17,814 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:35:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:18,873 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:35:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:19,539 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:35:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 222/7340 [8:58<287:54, 24.7 steps/min]2025-08-11 15:35:20,204 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:35:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:20,855 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m15:35:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 222/7340 [9:00<288:36, 24.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:35:22,016 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 222/7340 [9:01<289:13, 24.6 steps/min]2025-08-11 15:35:22,704 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:23,375 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:35:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 222/7340 [9:02<289:57, 24.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 222/7340 [9:03<290:29, 24.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:35:25,703 - agent.ComputerAgent - INFO - Computer: type({'text': 'delete browsing data on exit'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'delete browsing data on exit'})\n",
+ " 3%|█---------------------------------------| 222/7340 [9:04<291:11, 24.4 steps/min]2025-08-11 15:35:26,361 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:35:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 223/7340 [9:07<291:14, 24.4 steps/min]\u001b[92m15:35:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:35:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:29,669 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 585, 'y': 449})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 585, 'y': 449})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 3%|█---------------------------------------| 223/7340 [9:09<292:18, 24.3 steps/min]\u001b[92m15:35:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:30,981 - agent.ComputerAgent - INFO - Computer: click({'x': 586, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 586, 'y': 134})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:32,967 - agent.ComputerAgent - INFO - Computer: double_click({'x': 244, 'y': 154})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 244, 'y': 154})\n",
+ "\u001b[92m15:35:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:34,310 - agent.ComputerAgent - INFO - Agent: I enhanced the color vibrancy of your photo and exported it as edited_colorful.png to your Desktop.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I enhanced the color vibrancy of your photo and exported it as edited_colorful.png to your Desktop.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 15:35:34,935 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 288\n",
+ " - prompt_tokens: 10800\n",
+ " - total_tokens: 11088\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0164\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 288\n",
+ " - prompt_tokens: 10800\n",
+ " - total_tokens: 11088\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0164\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 225/7340 [9:14<292:25, 24.3 steps/min]2025-08-11 15:35:36,296 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 332, 'y': 105})\n",
+ "2025-08-11 15:35:36,947 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'x': 261, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'x': 261, 'y': 230})\n",
+ "\u001b[92m15:35:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:38,290 - agent.ComputerAgent - INFO - Computer: click({'x': 955, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 955, 'y': 130})\n",
+ "2025-08-11 15:35:38,926 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ "\u001b[92m15:35:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 228/7340 [9:18<290:09, 24.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:39,569 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:35:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:40,246 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 478})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 478})\n",
+ "\u001b[92m15:35:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 231/7340 [9:20<287:30, 24.7 steps/min]2025-08-11 15:35:41,893 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 53})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 3%|█---------------------------------------| 232/7340 [9:21<286:44, 24.8 steps/min]\u001b[92m15:35:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:43,077 - agent.ComputerAgent - INFO - Computer: click({'x': 506, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 506, 'y': 190})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 233/7340 [9:22<286:10, 24.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:35:44,347 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m15:35:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:35:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:45,032 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 547, 'scroll_x': 0, 'x': 125, 'y': 629})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 547, 'scroll_x': 0, 'x': 125, 'y': 629})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:35:46,313 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 3%|█---------------------------------------| 234/7340 [9:25<286:13, 24.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:47,613 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:35:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:35:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 236/7340 [9:26<284:25, 25.0 steps/min]2025-08-11 15:35:48,312 - agent.ComputerAgent - INFO - Computer: click({'x': 877, 'y': 537})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 877, 'y': 537})\n",
+ "2025-08-11 15:35:49,338 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:35:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:50,012 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 236/7340 [9:29<285:36, 24.9 steps/min]2025-08-11 15:35:50,692 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:51,345 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:35:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:35:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 237/7340 [9:31<285:21, 24.9 steps/min]2025-08-11 15:35:52,714 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:35:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:53,753 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m15:35:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:54,426 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m15:35:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 3%|█---------------------------------------| 237/7340 [9:34<286:52, 24.8 steps/min]\u001b[92m15:35:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:55,767 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:35:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:35:56,466 - agent.ComputerAgent - INFO - Computer: click({'x': 501, 'y': 55})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 501, 'y': 55})\n",
+ "\u001b[92m15:35:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:35:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:35:58,194 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:35:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 238/7340 [9:37<287:10, 24.7 steps/min]2025-08-11 15:35:58,837 - agent.ComputerAgent - INFO - Computer: click({'x': 347, 'y': 186})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 347, 'y': 186})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:35:59,467 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:35:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:35:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 239/7340 [9:38<286:34, 24.8 steps/min]2025-08-11 15:36:00,161 - agent.ComputerAgent - INFO - Computer: click({'x': 309, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 309, 'y': 116})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 240/7340 [9:40<285:59, 24.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:01,470 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:02,165 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m15:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 3%|█---------------------------------------| 241/7340 [9:41<285:27, 24.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:03,352 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 633, 'y': 320}, {'x': 422, 'y': 393}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 633, 'y': 320}, {'x': 422, 'y': 393}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 3%|█---------------------------------------| 249/7340 [9:42<276:30, 25.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/835128b8-2a29-46f4-853f-4d70bb46a9d6/close \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 250/7340 [9:43<275:49, 25.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 250/7340 [9:45<276:35, 25.6 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:36:06,585 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:36:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:36:07,216 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m15:36:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 3%|█---------------------------------------| 250/7340 [9:46<277:12, 25.6 steps/min]2025-08-11 15:36:08,031 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.72s/it]\u001b[92m15:36:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 250/7340 [9:47<277:40, 25.5 steps/min]2025-08-11 15:36:08,724 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:36:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.70s/it].6 steps/min]2025-08-11 15:36:10,400 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:36:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 3%|█---------------------------------------| 251/7340 [9:50<278:00, 25.5 steps/min]\u001b[92m15:36:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:13,134 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:36:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:36:13,834 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m15:36:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:15,144 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+enter'})\n",
+ " 3%|█---------------------------------------| 251/7340 [9:54<279:46, 25.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:36:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:17,159 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 382})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 659, 'y': 382})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:18,495 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ " 3%|█---------------------------------------| 252/7340 [9:57<280:11, 25.3 steps/min]\u001b[92m15:36:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:19,204 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 344, 'y': 492})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 344, 'y': 492})\n",
+ "2025-08-11 15:36:19,835 - agent.ComputerAgent - INFO - Computer: click({'x': 329, 'y': 126})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 329, 'y': 126})\n",
+ "2025-08-11 15:36:20,469 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 70})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 70})\n",
+ "\u001b[92m15:36:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:21,126 - agent.ComputerAgent - INFO - Computer: click({'x': 154, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 154, 'y': 739})\n",
+ "\u001b[92m15:36:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:21,783 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 74})\n",
+ " 3%|█---------------------------------------| 253/7340 [10:00<280:34, 25.3 steps/min]\u001b[92m15:36:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:22,490 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 197})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 197})\n",
+ "2025-08-11 15:36:23,150 - agent.ComputerAgent - INFO - Computer: click({'x': 952, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 952, 'y': 131})\n",
+ " 4%|█---------------------------------------| 260/7340 [10:03<273:51, 25.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:36:24,845 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:36:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 260/7340 [10:04<274:18, 25.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:36:26,707 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f2'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'f2'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 261/7340 [10:05<273:54, 25.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 262/7340 [10:07<273:25, 25.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:28,622 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m15:36:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:36:29,268 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 561})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 561})\n",
+ "\u001b[92m15:36:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:29,955 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:36:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:36:30,639 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:36:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 263/7340 [10:09<273:30, 25.9 steps/min]2025-08-11 15:36:31,327 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 267})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 349, 'y': 267})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 264/7340 [10:12<273:30, 25.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:33,640 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:36:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:34,656 - agent.ComputerAgent - INFO - Computer: move({'x': 505, 'y': 215})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 505, 'y': 215})\n",
+ "\u001b[92m15:36:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 265/7340 [10:14<273:27, 25.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:36,026 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:36:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:36,703 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 112})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 112})\n",
+ "\u001b[92m15:36:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 266/7340 [10:15<272:59, 25.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:37,374 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m15:36:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:36:38,046 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 415, 'y': 393}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 415, 'y': 393}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 267/7340 [10:17<272:49, 25.9 steps/min]2025-08-11 15:36:39,376 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m15:36:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:40,079 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:36:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 269/7340 [10:19<271:20, 26.1 steps/min]2025-08-11 15:36:40,750 - agent.ComputerAgent - INFO - Computer: click({'x': 309, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 309, 'y': 116})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 269/7340 [10:20<272:03, 26.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:42,446 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:36:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:36:43,091 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 270/7340 [10:22<271:37, 26.0 steps/min]2025-08-11 15:36:43,796 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 386})\n",
+ "2025-08-11 15:36:44,458 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m15:36:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 270/7340 [10:23<272:12, 26.0 steps/min]2025-08-11 15:36:45,166 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m15:36:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:36:45,816 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m15:36:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:36:46,876 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m15:36:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 271/7340 [10:26<272:11, 26.0 steps/min]2025-08-11 15:36:47,572 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m15:36:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 272/7340 [10:27<271:54, 26.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:49,266 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:36:50,556 - agent.ComputerAgent - INFO - Computer: click({'x': 133, 'y': 739})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 272/7340 [10:30<273:02, 25.9 steps/min]\u001b[92m15:36:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:51,884 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m15:36:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:36:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:52,520 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 336})\n",
+ "\u001b[92m15:36:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 273/7340 [10:31<272:32, 25.9 steps/min]2025-08-11 15:36:53,219 - agent.ComputerAgent - INFO - Computer: click({'x': 603, 'y': 574})\n",
+ "2025-08-11 15:36:53,886 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:36:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:36:56,252 - agent.ComputerAgent - INFO - Computer: type({'text': 'Carl'})\n",
+ " 4%|█---------------------------------------| 274/7340 [10:35<273:07, 25.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:36:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:58,236 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m15:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 276/7340 [10:37<271:55, 26.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:36:58,904 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "\u001b[92m15:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:36:59,576 - agent.ComputerAgent - INFO - Computer: click({'x': 527, 'y': 412})\n",
+ "\u001b[92m15:36:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 276/7340 [10:38<272:29, 25.9 steps/min]2025-08-11 15:37:00,202 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 149})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:37:00,837 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m15:37:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 280/7340 [10:41<269:25, 26.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:03,369 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 280/7340 [10:42<270:01, 26.1 steps/min]2025-08-11 15:37:04,112 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m15:37:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:05,531 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:37:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:37:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 280/7340 [10:45<271:13, 26.0 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:06,846 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 393})\n",
+ "\u001b[92m15:37:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:08,131 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m15:37:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:08,820 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+click', 'x': 505, 'y': 215})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:10,168 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 282/7340 [10:49<270:53, 26.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:10,793 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:37:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:11,440 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:37:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:37:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:13,457 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ " 4%|█---------------------------------------| 283/7340 [10:52<271:16, 26.0 steps/min]\u001b[92m15:37:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:14,793 - agent.ComputerAgent - INFO - Computer: type({'text': 'Mount Kilimanjaro.jpg'})\n",
+ "\u001b[92m15:37:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:37:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:37:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 283/7340 [10:54<272:04, 25.9 steps/min]\n",
+ "2025-08-11 15:37:16,084 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m15:37:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:16,806 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 596, 'scroll_x': 0, 'x': 623, 'y': 161})\n",
+ "2025-08-11 15:37:17,466 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 405, 'scroll_x': 0, 'x': 90, 'y': 433})\n",
+ "\u001b[92m15:37:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 284/7340 [10:56<271:56, 25.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:18,124 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 416, 'y': 393}]})\n",
+ "\u001b[92m15:37:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:18,823 - agent.ComputerAgent - INFO - Computer: move({'x': 768, 'y': 182})\n",
+ " 4%|█---------------------------------------| 286/7340 [10:58<270:29, 26.1 steps/min]2025-08-11 15:37:19,493 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:37:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:20,156 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:37:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:20,819 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "\u001b[92m15:37:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 288/7340 [11:00<269:22, 26.2 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:37:21,503 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m15:37:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:22,196 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m15:37:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 288/7340 [11:02<270:20, 26.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:37:24,397 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m15:37:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 289/7340 [11:03<269:51, 26.1 steps/min]2025-08-11 15:37:25,052 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:37:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:25,736 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:37:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 290/7340 [11:05<269:28, 26.2 steps/min]2025-08-11 15:37:26,426 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:37:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:27,107 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m15:37:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 290/7340 [11:06<269:58, 26.1 steps/min]2025-08-11 15:37:27,750 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m15:37:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 290/7340 [11:07<270:22, 26.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:37:28,927 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:37:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:29,618 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m15:37:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 291/7340 [11:09<270:25, 26.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:31,285 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m15:37:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 291/7340 [11:10<270:50, 26.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 292/7340 [11:11<270:16, 26.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:34,861 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 4%|█---------------------------------------| 293/7340 [11:14<270:28, 26.1 steps/min]\u001b[92m15:37:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:37:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:36,827 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 412})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 554, 'y': 412})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:37:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:38,192 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 739})\n",
+ " 4%|█---------------------------------------| 294/7340 [11:17<270:34, 26.0 steps/min]\u001b[92m15:37:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:38,892 - agent.ComputerAgent - INFO - Computer: double_click({'button': 'left', 'x': 336, 'y': 493})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'button': 'left', 'x': 336, 'y': 493})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:37:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 297/7340 [11:18<268:16, 26.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:40,141 - agent.ComputerAgent - INFO - Computer: click({'x': 223, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 223, 'y': 53})\n",
+ "\u001b[92m15:37:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:40,776 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 203})\n",
+ " 4%|█---------------------------------------| 297/7340 [11:19<268:45, 26.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:41,936 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:37:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 299/7340 [11:21<267:35, 26.3 steps/min]\u001b[92m15:37:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:43,903 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -l *.csv\\nlibreoffice --calc output.csv\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ls -l *.csv\\nlibreoffice --calc output.csv\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:44,525 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:37:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:37:45,184 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ " 4%|█---------------------------------------| 299/7340 [11:24<268:37, 26.2 steps/min]\u001b[92m15:37:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:37:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:37:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:46,510 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:37:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:37:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 301/7340 [11:25<267:16, 26.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:47,172 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:37:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:47,840 - agent.ComputerAgent - INFO - Computer: click({'x': 308, 'y': 117})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 308, 'y': 117})\n",
+ "\u001b[92m15:37:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 302/7340 [11:27<266:52, 26.4 steps/min]2025-08-11 15:37:48,490 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 396, 'y': 394}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 396, 'y': 394}]})\n",
+ "2025-08-11 15:37:49,157 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:37:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 303/7340 [11:28<266:29, 26.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:37:49,824 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:37:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:37:50,845 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:37:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 305/7340 [11:30<265:32, 26.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:52,190 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:37:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:37:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:52,873 - agent.ComputerAgent - INFO - Computer: click({'x': 574, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 574, 'y': 190})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 305/7340 [11:32<266:19, 26.4 steps/min]\u001b[92m15:37:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:54,228 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m15:37:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:37:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:37:54,882 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 566, 'scroll_x': 0, 'x': 91, 'y': 463})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 566, 'scroll_x': 0, 'x': 91, 'y': 463})\n",
+ "2025-08-11 15:37:55,517 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:37:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 308/7340 [11:34<264:24, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:56,878 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:37:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 309/7340 [11:36<263:58, 26.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:57,523 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:37:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:37:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:37:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:37:58,852 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 652, 'y': 139})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 310/7340 [11:38<263:50, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:37:59,507 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:37:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:37:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:00,175 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 311/7340 [11:39<263:26, 26.7 steps/min]2025-08-11 15:38:00,847 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:38:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:02,190 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x08\\x08\\x08'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x08\\x08\\x08'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:02,847 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:38:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 312/7340 [11:42<263:50, 26.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:38:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:38:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:05,492 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 523})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 523})\n",
+ "\u001b[92m15:38:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 314/7340 [11:45<263:03, 26.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:06,873 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 614, 'scroll_x': 0, 'x': 592, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 614, 'scroll_x': 0, 'x': 592, 'y': 162})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:07,497 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 315/7340 [11:46<262:42, 26.7 steps/min]2025-08-11 15:38:08,187 - agent.ComputerAgent - INFO - Computer: click({'x': 561, 'y': 382})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 561, 'y': 382})\n",
+ "2025-08-11 15:38:08,893 - agent.ComputerAgent - INFO - Computer: click({'x': 453, 'y': 412})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 453, 'y': 412})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 316/7340 [11:48<262:34, 26.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:10,208 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:10,914 - agent.ComputerAgent - INFO - Computer: click({'x': 101, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 101, 'y': 741})\n",
+ " 4%|█---------------------------------------| 318/7340 [11:50<261:20, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:11,568 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:38:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 319/7340 [11:51<260:51, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:13,182 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:38:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 319/7340 [11:52<261:19, 26.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 319/7340 [11:53<261:41, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:15,298 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 320/7340 [11:54<261:16, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 320/7340 [11:55<261:45, 26.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:17,313 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:38:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:38:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:17,973 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:38:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 320/7340 [11:57<262:27, 26.7 steps/min]2025-08-11 15:38:19,279 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 162})\n",
+ "\u001b[92m15:38:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:19,922 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:38:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:38:20,604 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 392, 'y': 394}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 392, 'y': 394}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:21,914 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:38:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 320/7340 [12:01<263:41, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:22,608 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 385})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 323/7340 [12:02<261:43, 26.8 steps/min]\u001b[92m15:38:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:24,249 - agent.ComputerAgent - INFO - Computer: click({'x': 215, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 215, 'y': 53})\n",
+ "\u001b[92m15:38:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:24,875 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:38:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:38:25,529 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 168})\n",
+ "2025-08-11 15:38:26,150 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:38:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 325/7340 [12:05<260:59, 26.9 steps/min]2025-08-11 15:38:27,352 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:38:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 327/7340 [12:07<260:06, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:29,405 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:38:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 327/7340 [12:08<260:27, 26.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 4%|█---------------------------------------| 327/7340 [12:09<260:50, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a74f1790-a107-43c9-8389-0a50a5192c5f/close \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 327/7340 [12:10<261:12, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:32,732 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:38:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 327/7340 [12:12<261:39, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:33,403 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:38:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:34,425 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:38:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:38:36,800 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+j'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+j'})\n",
+ " 4%|█---------------------------------------| 328/7340 [12:16<262:14, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m15:38:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:38,739 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ " 4%|█---------------------------------------| 328/7340 [12:18<262:57, 26.7 steps/min]\u001b[92m15:38:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:38:39,382 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:38:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]2025-08-11 15:38:40,014 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:38:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77f486b6-dc2a-4a1d-bf54-fc05f9a8c3d7/close \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 328/7340 [12:19<263:24, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.68s/it]\u001b[92m15:38:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 328/7340 [12:20<263:57, 26.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 328/7340 [12:21<264:19, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 328/7340 [12:22<264:43, 26.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.65s/it]\n",
+ " 4%|█---------------------------------------| 328/7340 [12:23<265:04, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 4%|█---------------------------------------| 329/7340 [12:24<264:35, 26.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 4%|█---------------------------------------| 329/7340 [12:25<264:56, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ " 4%|█---------------------------------------| 329/7340 [12:28<265:40, 26.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.92s/it]\n",
+ "\u001b[92m15:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 4%|█---------------------------------------| 329/7340 [12:29<266:03, 26.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:50,739 - agent.ComputerAgent - INFO - Computer: click({'x': 651, 'y': 108})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 651, 'y': 108})\n",
+ " 4%|█---------------------------------------| 329/7340 [12:30<266:25, 26.3 steps/min]\u001b[92m15:38:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:51,884 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 239})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 239})\n",
+ " 4%|█---------------------------------------| 330/7340 [12:31<265:56, 26.4 steps/min]\u001b[92m15:38:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:52,553 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 503})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 633, 'y': 503})\n",
+ "\u001b[92m15:38:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:38:53,225 - agent.ComputerAgent - INFO - Computer: click({'x': 371, 'y': 425})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 371, 'y': 425})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:54,535 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 5%|█---------------------------------------| 331/7340 [12:33<266:00, 26.3 steps/min]2025-08-11 15:38:55,182 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 636, 'scroll_x': 0, 'x': 103, 'y': 463})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 636, 'scroll_x': 0, 'x': 103, 'y': 463})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:56,489 - agent.ComputerAgent - INFO - Computer: click({'x': 378, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 378, 'y': 213})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:38:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:57,823 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ " 5%|█---------------------------------------| 333/7340 [12:37<265:30, 26.4 steps/min]\u001b[92m15:38:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:38:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:38:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:38:59,209 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|█---------------------------------------| 335/7340 [12:39<264:32, 26.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:39:00,493 - agent.ComputerAgent - INFO - Computer: double_click({'x': 161, 'y': 350})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 161, 'y': 350})\n",
+ "2025-08-11 15:39:01,148 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|█---------------------------------------| 336/7340 [12:40<264:10, 26.5 steps/min]2025-08-11 15:39:01,820 - agent.ComputerAgent - INFO - Computer: click({'x': 272, 'y': 546})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 272, 'y': 546})\n",
+ "2025-08-11 15:39:02,498 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 379, 'y': 394}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 379, 'y': 394}]})\n",
+ " 5%|█---------------------------------------| 339/7340 [12:42<262:31, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|█---------------------------------------| 339/7340 [12:43<262:55, 26.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:39:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:05,819 - agent.ComputerAgent - INFO - Computer: click({'x': 995, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 995, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 339/7340 [12:45<263:19, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:06,475 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:39:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5a854981-aa94-433f-9381-2964f1117035/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 344/7340 [12:46<259:45, 26.9 steps/min]2025-08-11 15:39:07,777 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:39:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:08,416 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:39:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 344/7340 [12:47<260:12, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:09,074 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:39:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 5%|█---------------------------------------| 344/7340 [12:49<260:51, 26.8 steps/min]\u001b[92m15:39:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:11,064 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:39:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]2025-08-11 15:39:11,905 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:39:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:13,459 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+j'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.70s/it]INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+j'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 344/7340 [12:52<261:55, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:15,026 - agent.ComputerAgent - INFO - Computer: type({'text': 'delete browsing data on exit'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.64s/it]INFO:agent.ComputerAgent:Computer: type({'text': 'delete browsing data on exit'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]6.7 steps/min]\n",
+ "2025-08-11 15:39:15,684 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:39:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:39:16,521 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:39:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|█---------------------------------------| 345/7340 [12:55<262:11, 26.7 steps/min]2025-08-11 15:39:17,196 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:39:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:39:18,014 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m15:39:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|█---------------------------------------| 345/7340 [12:57<262:39, 26.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:39:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:39:19,199 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 731, 'y': 574})\n",
+ " 5%|█---------------------------------------| 345/7340 [12:58<263:02, 26.6 steps/min]\u001b[92m15:39:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:19,871 - agent.ComputerAgent - INFO - Computer: click({'x': 80, 'y': 741})\n",
+ " 5%|█---------------------------------------| 347/7340 [13:00<262:07, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 347/7340 [13:01<262:27, 26.6 steps/min]2025-08-11 15:39:22,589 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:39:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|█---------------------------------------| 347/7340 [13:03<263:08, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|█---------------------------------------| 347/7340 [13:04<263:30, 26.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:25,927 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:26,600 - agent.ComputerAgent - INFO - Computer: click({'x': 250, 'y': 543})\n",
+ " 5%|█---------------------------------------| 347/7340 [13:05<263:55, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 348/7340 [13:06<263:28, 26.5 steps/min]2025-08-11 15:39:28,275 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:39:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|█---------------------------------------| 348/7340 [13:09<264:29, 26.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 5%|█---------------------------------------| 348/7340 [13:10<264:49, 26.4 steps/min]\u001b[92m15:39:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:39:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:39:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:33,207 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 396, 'y': 393}]})\n",
+ " 5%|█---------------------------------------| 348/7340 [13:12<265:20, 26.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:34,497 - agent.ComputerAgent - INFO - Computer: type({'text': 'nano ~/task.py'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 349/7340 [13:13<264:59, 26.4 steps/min]2025-08-11 15:39:35,150 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|█---------------------------------------| 358/7340 [13:14<258:19, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/49f1eefe-9bc4-430c-a6c8-83675960a057/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:39:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 5%|█---------------------------------------| 358/7340 [13:17<259:13, 26.9 steps/min]\u001b[92m15:39:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]6.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:40,709 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:39:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]6.9 steps/min]2025-08-11 15:39:41,379 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m15:39:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|█---------------------------------------| 358/7340 [13:20<260:21, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]\u001b[92m15:39:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 15:39:44,023 - agent.ComputerAgent - INFO - Agent: Done via Terminal.\n",
+ "\n",
+ "Commands executed:\n",
+ "- cd ~/Desktop\n",
+ "- libreoffice --headless --convert-to csv file1.xlsx\n",
+ "- libreoffice --headless --convert-to csv file2.ods\n",
+ "- cat file1.csv file2.csv > output.csv\n",
+ "- libreoffice --calc output.csv\n",
+ "\n",
+ "This converts both spreadsheets to CSV, concatenates their single columns into one file named output.csv, and opens it in LibreOffice Calc from the terminal.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 15:39:44,642 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 679\n",
+ " - prompt_tokens: 8809\n",
+ " - total_tokens: 9488\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 576\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0178\n",
+ " 5%|█---------------------------------------| 359/7340 [13:23<260:31, 26.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:39:45,831 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 213})\n",
+ " 5%|█---------------------------------------| 359/7340 [13:25<260:54, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:39:46,456 - agent.ComputerAgent - INFO - Computer: click({'x': 148, 'y': 739})\n",
+ "\u001b[92m15:39:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:39:47,105 - agent.ComputerAgent - INFO - Computer: click({'x': 984, 'y': 68})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 360/7340 [13:26<260:46, 26.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:48,477 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 607, 'scroll_x': 0, 'x': 91, 'y': 464})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|█---------------------------------------| 362/7340 [13:28<259:43, 26.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:39:49,752 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 53})\n",
+ "\u001b[92m15:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:39:50,434 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ " 5%|██--------------------------------------| 374/7340 [13:30<251:38, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a91cea7-3ffe-41c2-9405-1151904aee0c/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 374/7340 [13:32<252:04, 27.6 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 374/7340 [13:33<252:28, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/63010886-f715-4208-aef0-b98c456e7e98/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 15:39:56,185 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:39:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 374/7340 [13:35<253:08, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:39:56,877 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:39:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:39:57,787 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:39:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]7.5 steps/min]\n",
+ "2025-08-11 15:39:58,990 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m15:39:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:39:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 374/7340 [13:38<254:13, 27.4 steps/min]2025-08-11 15:40:00,505 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:40:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:40:01,535 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m15:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 374/7340 [13:40<254:47, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:02,213 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m15:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 374/7340 [13:41<255:06, 27.3 steps/min]\u001b[92m15:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:03,760 - agent.ComputerAgent - INFO - Computer: click({'x': 503, 'y': 163})\n",
+ "\u001b[92m15:40:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 374/7340 [13:42<255:28, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:04,409 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 670, 'scroll_x': 0, 'x': 499, 'y': 415})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:06,425 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:40:06,426 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "2025-08-11 15:40:07,098 - agent.ComputerAgent - INFO - Computer: click({'x': 412, 'y': 91})\n",
+ "2025-08-11 15:40:07,712 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 750}, {'x': 343, 'y': 741}]})\n",
+ " 5%|██--------------------------------------| 375/7340 [13:46<255:58, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:40:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:08,871 - agent.ComputerAgent - INFO - Computer: click({'x': 143, 'y': 754})\n",
+ " 5%|██--------------------------------------| 379/7340 [13:48<253:29, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 380/7340 [13:49<253:05, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 380/7340 [13:50<253:27, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:40:12,384 - agent.ComputerAgent - INFO - Computer: type({'text': 'clear cookies'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'clear cookies'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 380/7340 [13:51<253:51, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:13,057 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:40:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:40:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 381/7340 [13:52<253:27, 27.5 steps/min]\u001b[92m15:40:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:14,236 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 392, 'y': 422}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 392, 'y': 422}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/df59f155-4e77-49b5-877d-dbd25c77d479/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:40:14,934 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:40:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 381/7340 [13:54<253:56, 27.4 steps/min]2025-08-11 15:40:15,605 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:40:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 382/7340 [13:55<253:32, 27.4 steps/min]2025-08-11 15:40:17,148 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:40:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 382/7340 [13:56<253:54, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:40:17,777 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:40:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:40:18,425 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 382/7340 [13:57<254:18, 27.4 steps/min]2025-08-11 15:40:19,450 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:40:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:40:20,841 - agent.ComputerAgent - INFO - Computer: type({'text': 'edge'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'edge'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 382/7340 [14:00<255:02, 27.3 steps/min]2025-08-11 15:40:23,014 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:40:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 383/7340 [14:02<254:59, 27.3 steps/min]2025-08-11 15:40:24,197 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:40:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 383/7340 [14:03<255:20, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 5%|██--------------------------------------| 383/7340 [14:04<255:38, 27.2 steps/min]\u001b[92m15:40:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:26,092 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 503})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 633, 'y': 503})\n",
+ " 5%|██--------------------------------------| 384/7340 [14:06<255:32, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 5%|██--------------------------------------| 384/7340 [14:07<255:50, 27.2 steps/min]\u001b[92m15:40:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:28,934 - agent.ComputerAgent - INFO - Computer: click({'x': 780, 'y': 117})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 780, 'y': 117})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 384/7340 [14:09<256:28, 27.1 steps/min]\u001b[92m15:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:30,883 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:31,543 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ " 5%|██--------------------------------------| 385/7340 [14:10<256:08, 27.2 steps/min]\u001b[92m15:40:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:32,194 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 149})\n",
+ " 5%|██--------------------------------------| 386/7340 [14:11<255:44, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:40:33,347 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 387/7340 [14:14<255:57, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 387/7340 [14:15<256:15, 27.1 steps/min]2025-08-11 15:40:37,582 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:40:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 387/7340 [14:17<256:45, 27.1 steps/min]\u001b[92m15:40:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:38,942 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:40:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:41,319 - agent.ComputerAgent - INFO - Computer: click({'x': 757, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 757, 'y': 334})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 5%|██--------------------------------------| 387/7340 [14:21<257:53, 27.0 steps/min]\u001b[92m15:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:43,262 - agent.ComputerAgent - INFO - Computer: click({'x': 388, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 388, 'y': 35})\n",
+ "\u001b[92m15:40:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:43,952 - agent.ComputerAgent - INFO - Computer: click({'x': 250, 'y': 543})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 250, 'y': 543})\n",
+ " 5%|██--------------------------------------| 388/7340 [14:23<257:45, 27.0 steps/min]2025-08-11 15:40:44,627 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 92})\n",
+ "\u001b[92m15:40:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:40:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:45,273 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 390/7340 [14:24<256:46, 27.1 steps/min]2025-08-11 15:40:45,934 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 79, 'y': 133}, {'x': 343, 'y': 158}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 79, 'y': 133}, {'x': 343, 'y': 158}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 391/7340 [14:26<256:45, 27.1 steps/min]\u001b[92m15:40:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:49,456 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f2'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'f2'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:40:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 392/7340 [14:29<256:48, 27.1 steps/min]\u001b[92m15:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:50,747 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:40:50,747 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 237})\n",
+ "2025-08-11 15:40:51,401 - agent.ComputerAgent - INFO - Computer: click({'x': 517, 'y': 100})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 517, 'y': 100})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 393/7340 [14:31<256:42, 27.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:52,751 - agent.ComputerAgent - INFO - Computer: click({'x': 393, 'y': 375})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 393, 'y': 375})\n",
+ "\u001b[92m15:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:40:53,451 - agent.ComputerAgent - INFO - Computer: click({'x': 577, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 577, 'y': 162})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 395/7340 [14:33<255:54, 27.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:54,734 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:40:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:55,782 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:40:56,482 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:40:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 397/7340 [14:35<255:17, 27.2 steps/min]\u001b[92m15:40:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:40:57,571 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:00,028 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 652, 'y': 321}, {'x': 515, 'y': 392}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 652, 'y': 321}, {'x': 515, 'y': 392}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 397/7340 [14:39<256:27, 27.1 steps/min]\u001b[92m15:41:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:01,353 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:02,045 - agent.ComputerAgent - INFO - Computer: click({'x': 54, 'y': 66})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 54, 'y': 66})\n",
+ "\u001b[92m15:41:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 398/7340 [14:41<256:10, 27.1 steps/min]2025-08-11 15:41:02,680 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 287})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 287})\n",
+ "2025-08-11 15:41:03,321 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:41:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 5%|██--------------------------------------| 399/7340 [14:42<255:53, 27.1 steps/min]2025-08-11 15:41:04,002 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:41:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:41:04,673 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:41:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 400/7340 [14:44<255:47, 27.1 steps/min]\u001b[92m15:41:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:05,969 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:41:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:41:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:07,032 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/957be305-e777-4c37-b266-57c72f2c3bf8/reset \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 400/7340 [14:46<256:16, 27.1 steps/min]2025-08-11 15:41:07,661 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:08,966 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 5%|██--------------------------------------| 401/7340 [14:48<256:09, 27.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:10,018 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:41:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:11,391 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 5%|██--------------------------------------| 402/7340 [14:51<256:22, 27.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:12,712 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:14,059 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 631, 'scroll_x': 0, 'x': 927, 'y': 308})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 631, 'scroll_x': 0, 'x': 927, 'y': 308})\n",
+ " 5%|██--------------------------------------| 403/7340 [14:53<256:16, 27.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:15,240 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 151})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 205, 'y': 151})\n",
+ " 6%|██--------------------------------------| 404/7340 [14:54<255:55, 27.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 405/7340 [14:55<255:33, 27.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e9d83ed4-d6d0-46f7-982b-98433769e30b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:17,551 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 405/7340 [14:56<255:55, 27.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7ac3560-cea1-4b97-a59c-4b3038bec6c7/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:16,391 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:41:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 405/7340 [14:58<256:20, 27.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:17,074 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 405/7340 [14:59<256:37, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 405/7340 [15:00<256:54, 27.0 steps/min]2025-08-11 15:41:19,763 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:41:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 405/7340 [15:01<257:17, 27.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:41:20,435 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 405/7340 [15:02<257:34, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 405/7340 [15:03<257:51, 26.9 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 6%|██--------------------------------------| 405/7340 [15:04<258:09, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]\u001b[92m15:41:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 405/7340 [15:05<258:26, 26.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ " 6%|██--------------------------------------| 405/7340 [15:09<259:40, 26.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:41:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 6%|██--------------------------------------| 405/7340 [15:10<259:57, 26.7 steps/min]\u001b[92m15:41:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:29,725 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "\u001b[92m15:41:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:30,385 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 432})\n",
+ " 6%|██--------------------------------------| 405/7340 [15:12<260:18, 26.6 steps/min]\u001b[92m15:41:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:31,037 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 503, 'y': 392}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 503, 'y': 392}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:32,369 - agent.ComputerAgent - INFO - Computer: type({'text': 'Ama Dablam.jpg'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Ama Dablam.jpg'})\n",
+ " 6%|██--------------------------------------| 409/7340 [15:17<259:01, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:36,612 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:41:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 409/7340 [15:19<259:33, 26.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:38,589 - agent.ComputerAgent - INFO - Computer: type({'text': 'iPhone 15 Pro Max vs iPhone 14 Pro Max vs iPhone 13 Pro Max specs comparison'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'iPhone 15 Pro Max vs iPhone 14 Pro Max vs iPhone 13 Pro Max specs comparison'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 409/7340 [15:22<260:29, 26.6 steps/min]\u001b[92m15:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:41,954 - agent.ComputerAgent - INFO - Computer: type({'text': 'Default Applications'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Default Applications'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:42,594 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 410/7340 [15:25<260:46, 26.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:44,569 - agent.ComputerAgent - INFO - Computer: click({'x': 250, 'y': 543})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 250, 'y': 543})\n",
+ "2025-08-11 15:41:45,222 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 86, 'y': 149})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:41:45,893 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 166}, {'x': 169, 'y': 741}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 80, 'y': 166}, {'x': 169, 'y': 741}]})\n",
+ " 6%|██--------------------------------------| 412/7340 [15:27<259:58, 26.6 steps/min]2025-08-11 15:41:46,540 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:41:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:41:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:41:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:41:47,223 - agent.ComputerAgent - INFO - Computer: click({'x': 814, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 814, 'y': 189})\n",
+ "2025-08-11 15:41:47,861 - agent.ComputerAgent - INFO - Computer: click({'x': 499, 'y': 163})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 499, 'y': 163})\n",
+ " 6%|██--------------------------------------| 415/7340 [15:29<258:31, 26.8 steps/min]2025-08-11 15:41:48,544 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:41:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 417/7340 [15:30<257:29, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:50,384 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 6%|██--------------------------------------| 417/7340 [15:32<257:54, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:51,554 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:41:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 417/7340 [15:33<258:14, 26.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0b1cfd32-0cbc-48e7-890d-9ec0ac043035/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:52,903 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:41:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 417/7340 [15:34<258:38, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:53,554 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:54,218 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:41:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 417/7340 [15:36<258:59, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:41:54,883 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:41:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:41:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 417/7340 [15:37<259:21, 26.7 steps/min]2025-08-11 15:41:56,210 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:41:56,868 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 417/7340 [15:38<259:43, 26.7 steps/min]2025-08-11 15:41:58,054 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 417/7340 [15:40<260:13, 26.6 steps/min]\u001b[92m15:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 417/7340 [15:41<260:30, 26.6 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:03,805 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 6%|██--------------------------------------| 418/7340 [15:46<261:14, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]\u001b[92m15:42:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 6%|██--------------------------------------| 418/7340 [15:48<261:49, 26.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:07,934 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:42:07,935 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 662, 'x': 88, 'y': 132})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 662, 'x': 88, 'y': 132})\n",
+ " 6%|██--------------------------------------| 419/7340 [15:49<261:26, 26.5 steps/min]\u001b[92m15:42:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:08,590 - agent.ComputerAgent - INFO - Computer: click({'x': 517, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 517, 'y': 101})\n",
+ "\u001b[92m15:42:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:42:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 6%|██--------------------------------------| 419/7340 [15:50<261:43, 26.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:09,247 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m15:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:42:09,949 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 518, 'y': 392}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 518, 'y': 392}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 420/7340 [15:51<261:19, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:42:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 422/7340 [15:53<260:34, 26.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:12,605 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:42:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:42:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:13,305 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 546})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 546})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 422/7340 [15:55<260:56, 26.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:42:13,957 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m15:42:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:42:14,679 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 662, 'scroll_x': 0, 'x': 509, 'y': 400})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 662, 'scroll_x': 0, 'x': 509, 'y': 400})\n",
+ "\u001b[92m15:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 423/7340 [15:57<260:51, 26.5 steps/min]2025-08-11 15:42:15,992 - agent.ComputerAgent - INFO - Computer: click({'x': 479, 'y': 104})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 479, 'y': 104})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 425/7340 [15:58<259:49, 26.6 steps/min]\u001b[92m15:42:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:42:17,152 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 390, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfefeec4-603f-4657-b0fe-7a641734693c/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 426/7340 [15:59<259:33, 26.6 steps/min]2025-08-11 15:42:18,516 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m15:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:42:19,205 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:42:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 427/7340 [16:02<259:34, 26.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:42:21,773 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:42:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 428/7340 [16:03<259:20, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 428/7340 [16:04<259:36, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69393c41-bcaa-4752-9a82-e3b105fae459/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:24,493 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m15:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:25,887 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 349, 'y': 164})\n",
+ " 6%|██--------------------------------------| 428/7340 [16:07<260:26, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 429/7340 [16:08<260:03, 26.6 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]6.6 steps/min]2025-08-11 15:42:28,767 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:42:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:29,413 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:42:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:42:30,278 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m15:42:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 430/7340 [16:12<260:20, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 430/7340 [16:13<260:36, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.17s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ " 6%|██--------------------------------------| 431/7340 [16:14<260:26, 26.5 steps/min]\u001b[92m15:42:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:34,679 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m15:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 431/7340 [16:16<260:51, 26.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:35,374 - agent.ComputerAgent - INFO - Computer: click({'x': 101, 'y': 380})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 101, 'y': 380})\n",
+ "\u001b[92m15:42:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:42:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:42:36,024 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 389, 'y': 76})\n",
+ "2025-08-11 15:42:36,677 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 6%|██--------------------------------------| 432/7340 [16:18<260:45, 26.5 steps/min]2025-08-11 15:42:37,340 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:38,678 - agent.ComputerAgent - INFO - Computer: type({'text': 'autocreate-python'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'autocreate-python'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 435/7340 [16:20<259:22, 26.6 steps/min]2025-08-11 15:42:39,340 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m15:42:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8a8f1594-3659-4132-9059-6fa366033df0/reset \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 436/7340 [16:22<259:16, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:41,505 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:42:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 437/7340 [16:23<258:54, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:43,149 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m15:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 437/7340 [16:24<259:18, 26.6 steps/min]2025-08-11 15:42:43,800 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:44,451 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:42:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 437/7340 [16:26<259:38, 26.6 steps/min]2025-08-11 15:42:45,155 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:42:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 6%|██--------------------------------------| 438/7340 [16:27<259:16, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 6%|██--------------------------------------| 438/7340 [16:28<259:32, 26.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:46,998 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m15:42:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:42:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:42:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:42:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:42:48,189 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 503, 'y': 392}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 654, 'y': 321}, {'x': 503, 'y': 392}]})\n",
+ " 6%|██--------------------------------------| 438/7340 [16:29<259:58, 26.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 442/7340 [16:30<257:44, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d349f43-6c63-4144-9bd3-bbd16183b16d/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:42:51,287 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m15:42:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 442/7340 [16:33<258:17, 26.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:42:51,928 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:42:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 442/7340 [16:34<258:33, 26.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 6%|██--------------------------------------| 443/7340 [16:35<258:11, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:42:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:42:54,970 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 443/7340 [16:37<258:42, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.15s/it]6.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]\n",
+ "\u001b[92m15:42:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 444/7340 [16:39<258:39, 26.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:58,294 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:42:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 444/7340 [16:40<258:55, 26.6 steps/min]\u001b[92m15:42:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:42:59,020 - agent.ComputerAgent - INFO - Computer: click({'x': 251, 'y': 544})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 251, 'y': 544})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:00,381 - agent.ComputerAgent - INFO - Computer: type({'text': ' 0'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' 0'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:43:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d197f4f-b7b0-4196-9681-135d7bc3a45b/close \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 445/7340 [16:44<259:18, 26.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:03,011 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:43:03,012 - agent.ComputerAgent - INFO - Computer: move({'x': 856, 'y': 414})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 856, 'y': 414})\n",
+ "2025-08-11 15:43:03,666 - agent.ComputerAgent - INFO - Computer: click({'x': 356, 'y': 559})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 356, 'y': 559})\n",
+ "\u001b[92m15:43:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:43:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:05,015 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:43:05,017 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 753})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 753})\n",
+ " 6%|██--------------------------------------| 448/7340 [16:46<258:07, 26.7 steps/min]\u001b[92m15:43:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:05,697 - agent.ComputerAgent - INFO - Computer: click({'x': 958, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 958, 'y': 128})\n",
+ "2025-08-11 15:43:06,364 - agent.ComputerAgent - INFO - Computer: click({'x': 395, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 395, 'y': 75})\n",
+ " 6%|██--------------------------------------| 451/7340 [16:48<256:38, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 6%|██--------------------------------------| 453/7340 [16:49<255:41, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d9da005-d40d-4335-86ec-275c2ec5665b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]2025-08-11 15:43:09,051 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:43:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:10,518 - agent.ComputerAgent - INFO - Computer: type({'text': 'logo.svg'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:agent.ComputerAgent:Computer: type({'text': 'logo.svg'})\n",
+ " 6%|██--------------------------------------| 453/7340 [16:52<256:29, 26.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 454/7340 [16:53<256:08, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 15:43:12,734 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:43:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 455/7340 [16:54<255:51, 26.9 steps/min]2025-08-11 15:43:13,415 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:43:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:14,116 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m15:43:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 455/7340 [16:55<256:11, 26.9 steps/min]2025-08-11 15:43:15,040 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 455/7340 [16:56<256:26, 26.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:15,711 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:43:16,377 - agent.ComputerAgent - INFO - Computer: click({'x': 218, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 218, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 456/7340 [16:58<256:09, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:17,394 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:43:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 457/7340 [16:59<255:49, 26.9 steps/min]2025-08-11 15:43:18,038 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:43:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:18,717 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:43:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 457/7340 [17:00<256:09, 26.9 steps/min]2025-08-11 15:43:19,375 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:43:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 457/7340 [17:01<256:24, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/edaeedb6-9993-4b6f-b226-19e2768a5736/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 6%|██--------------------------------------| 458/7340 [17:02<256:03, 26.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:21,740 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:43:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 458/7340 [17:03<256:18, 26.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:43:22,399 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 458/7340 [17:04<256:33, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:43:24,187 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:43:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 459/7340 [17:05<256:20, 26.8 steps/min]2025-08-11 15:43:24,859 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:43:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.72s/it]6.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:27,064 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 459/7340 [17:08<257:02, 26.8 steps/min]2025-08-11 15:43:27,748 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:43:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]6.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]6.8 steps/min]\n",
+ " 6%|██--------------------------------------| 461/7340 [17:11<256:37, 26.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:30,732 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:43:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:43:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:31,448 - agent.ComputerAgent - INFO - Computer: click({'x': 77, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 77, 'y': 157})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:32,775 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:43:32,776 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ " 6%|██--------------------------------------| 461/7340 [17:14<257:16, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 464/7340 [17:15<255:50, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:35,369 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:43:35,370 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "\u001b[92m15:43:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:43:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 464/7340 [17:17<256:08, 26.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:36,050 - agent.ComputerAgent - INFO - Computer: click({'x': 946, 'y': 751})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 946, 'y': 751})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:43:36,737 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:43:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:43:37,434 - agent.ComputerAgent - INFO - Computer: click({'x': 26, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 26, 'y': 10})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 465/7340 [17:19<256:04, 26.8 steps/min]2025-08-11 15:43:38,120 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:43:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:43:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 467/7340 [17:21<255:24, 26.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:40,050 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:40,720 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -636, 'x': 520, 'y': 359})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -636, 'x': 520, 'y': 359})\n",
+ "\u001b[92m15:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 469/7340 [17:22<254:32, 27.0 steps/min]2025-08-11 15:43:41,384 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 927, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 927, 'y': 529})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 469/7340 [17:23<254:52, 27.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:43:42,718 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:43,378 - agent.ComputerAgent - INFO - Computer: click({'x': 106, 'y': 596})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 106, 'y': 596})\n",
+ "2025-08-11 15:43:44,004 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:43:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 470/7340 [17:25<254:45, 27.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:43:45,318 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:43:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 471/7340 [17:27<254:30, 27.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:46,498 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 430, 'y': 214})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 471/7340 [17:28<254:47, 27.0 steps/min]2025-08-11 15:43:47,146 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:43:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:43:47,799 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:43:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 472/7340 [17:29<254:31, 27.0 steps/min]2025-08-11 15:43:48,472 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:43:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 472/7340 [17:30<254:46, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 473/7340 [17:31<254:31, 27.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:50,793 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m15:43:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:43:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:52,129 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "2025-08-11 15:43:52,796 - agent.ComputerAgent - INFO - Computer: click({'x': 642, 'y': 498})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 642, 'y': 498})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/957be305-e777-4c37-b266-57c72f2c3bf8/close \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 473/7340 [17:35<255:20, 26.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:43:54,798 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:43:54,799 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 642})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 642})\n",
+ " 6%|██--------------------------------------| 475/7340 [17:36<254:29, 27.0 steps/min]2025-08-11 15:43:55,478 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 476/7340 [17:37<254:09, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:56,177 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:43:56,848 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 476/7340 [17:38<254:25, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:43:58,729 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 477/7340 [17:40<254:17, 27.0 steps/min]2025-08-11 15:43:59,375 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m15:43:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:44:00,697 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:44:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 6%|██--------------------------------------| 477/7340 [17:42<254:46, 26.9 steps/min]2025-08-11 15:44:01,396 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:44:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 6%|██--------------------------------------| 477/7340 [17:43<255:06, 26.9 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:44:03,241 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 6%|██--------------------------------------| 477/7340 [17:44<255:22, 26.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]\u001b[92m15:44:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 478/7340 [17:46<255:06, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/reset \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 478/7340 [17:47<255:20, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:06,068 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:44:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:06,720 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:44:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]6.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 15:44:08,059 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:44:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 478/7340 [17:50<256:07, 26.8 steps/min]\u001b[92m15:44:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:44:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:10,432 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 50})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 543, 'y': 50})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 479/7340 [17:52<256:07, 26.8 steps/min]2025-08-11 15:44:11,792 - agent.ComputerAgent - INFO - Computer: click({'x': 661, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 661, 'y': 339})\n",
+ "\u001b[92m15:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:12,470 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:44:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:44:13,120 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:44:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:44:13,820 - agent.ComputerAgent - INFO - Computer: click({'x': 955, 'y': 751})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 955, 'y': 751})\n",
+ "2025-08-11 15:44:14,519 - agent.ComputerAgent - INFO - Computer: click({'x': 79, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 79, 'y': 157})\n",
+ "\u001b[92m15:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:15,799 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:44:15,800 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ " 7%|██--------------------------------------| 480/7340 [17:57<256:39, 26.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:44:17,166 - agent.ComputerAgent - INFO - Computer: click({'x': 288, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 288, 'y': 148})\n",
+ " 7%|██--------------------------------------| 484/7340 [17:58<254:43, 26.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:18,497 - agent.ComputerAgent - INFO - Computer: double_click({'x': 15, 'y': 284})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 15, 'y': 284})\n",
+ "\u001b[92m15:44:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5260055-0cc8-4e64-8e8d-a8fcf1e7df5c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 485/7340 [18:00<254:27, 26.9 steps/min]2025-08-11 15:44:19,180 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 237})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:44:19,861 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:44:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 487/7340 [18:01<253:40, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:21,733 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:44:21,733 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 488/7340 [18:03<253:32, 27.0 steps/min]2025-08-11 15:44:22,737 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 489/7340 [18:04<253:14, 27.1 steps/min]2025-08-11 15:44:23,429 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:44:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:24,755 - agent.ComputerAgent - INFO - Agent: Here are names similar to Carl:\n",
+ "- Henry\n",
+ "- Charles\n",
+ "- Mason\n",
+ "- Owen\n",
+ "- Jack\n",
+ "- Calvin\n",
+ "- Daniel\n",
+ "- James\n",
+ "- Lucas\n",
+ "- Noah\n",
+ "- Theodore\n",
+ "- Arthur\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Here are names similar to Carl:\n",
+ "- Henry\n",
+ "- Charles\n",
+ "- Mason\n",
+ "- Owen\n",
+ "- Jack\n",
+ "- Calvin\n",
+ "- Daniel\n",
+ "- James\n",
+ "- Lucas\n",
+ "- Noah\n",
+ "- Theodore\n",
+ "- Arthur\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 15:44:25,398 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 694\n",
+ " - prompt_tokens: 12186\n",
+ " - total_tokens: 12880\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0222\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 694\n",
+ " - prompt_tokens: 12186\n",
+ " - total_tokens: 12880\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0222\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:27,057 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 490/7340 [18:09<253:50, 27.0 steps/min]\u001b[92m15:44:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:28,415 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:44:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:44:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:44:29,809 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 389, 'y': 76})\n",
+ "2025-08-11 15:44:30,462 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:44:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 492/7340 [18:12<253:22, 27.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:31,126 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:44:31,818 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 493/7340 [18:13<253:08, 27.0 steps/min]2025-08-11 15:44:32,484 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 615, 'scroll_x': 0, 'x': 242, 'y': 488})\n",
+ "2025-08-11 15:44:33,115 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 493/7340 [18:14<253:26, 27.0 steps/min]2025-08-11 15:44:34,189 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:44:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 494/7340 [18:15<253:07, 27.0 steps/min]2025-08-11 15:44:34,824 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m15:44:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:35,504 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:44:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 494/7340 [18:17<253:25, 27.0 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 494/7340 [18:18<253:39, 27.0 steps/min]2025-08-11 15:44:37,200 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 494/7340 [18:19<253:53, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 494/7340 [18:20<254:14, 26.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:39,709 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:44:40,380 - agent.ComputerAgent - INFO - Computer: click({'x': 223, 'y': 416})\n",
+ " 7%|██--------------------------------------| 495/7340 [18:22<254:00, 26.9 steps/min]\u001b[92m15:44:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:44:41,029 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 32})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:41,692 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m15:44:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 496/7340 [18:24<253:55, 27.0 steps/min]\n",
+ "2025-08-11 15:44:43,008 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:44:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:43,719 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:44:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:44:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 497/7340 [18:26<253:50, 27.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:45,019 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 562})\n",
+ "\u001b[92m15:44:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:44:45,670 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:44:45,670 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 274})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 497/7340 [18:28<254:26, 26.9 steps/min]\u001b[92m15:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:44:48,335 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 531})\n",
+ "2025-08-11 15:44:48,979 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 499/7340 [18:30<253:47, 27.0 steps/min]\n",
+ "\u001b[92m15:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:44:49,672 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 237})\n",
+ "2025-08-11 15:44:50,370 - agent.ComputerAgent - INFO - Computer: move({'x': 323, 'y': 282})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 501/7340 [18:32<253:00, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 507/7340 [18:33<250:01, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:52,023 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m15:44:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e1e61614-8290-4d90-9feb-594d2a7199e8/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 507/7340 [18:34<250:19, 27.3 steps/min]2025-08-11 15:44:53,321 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:44:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 507/7340 [18:35<250:32, 27.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 507/7340 [18:36<250:46, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:55,524 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:44:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:44:56,190 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 508/7340 [18:38<250:44, 27.2 steps/min]\n",
+ "2025-08-11 15:44:57,550 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:44:58,297 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m15:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 508/7340 [18:40<251:03, 27.2 steps/min]2025-08-11 15:44:58,949 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:44:59,835 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]\u001b[92m15:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 508/7340 [18:41<251:23, 27.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:45:01,429 - agent.ComputerAgent - INFO - Computer: type({'text': 'Settings'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 7%|██--------------------------------------| 510/7340 [18:44<250:55, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 15:45:03,653 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "\u001b[92m15:45:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 510/7340 [18:45<251:11, 27.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:05,800 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 510/7340 [18:48<251:49, 27.1 steps/min]\u001b[92m15:45:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:07,132 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 18, 'y': 577})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:45:07,802 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 7%|██--------------------------------------| 511/7340 [18:49<251:36, 27.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:08,499 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:09,172 - agent.ComputerAgent - INFO - Computer: double_click({'x': 17, 'y': 284})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 512/7340 [18:52<251:40, 27.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:45:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 513/7340 [18:53<251:26, 27.2 steps/min]\u001b[92m15:45:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:12,518 - agent.ComputerAgent - INFO - Computer: click({'x': 367, 'y': 562})\n",
+ "\u001b[92m15:45:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:13,190 - agent.ComputerAgent - INFO - Computer: click({'x': 481, 'y': 433})\n",
+ "\u001b[92m15:45:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 513/7340 [18:56<252:01, 27.1 steps/min]\u001b[92m15:45:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:15,144 - agent.ComputerAgent - INFO - Computer: click({'x': 428, 'y': 216})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:16,498 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 193})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 515/7340 [18:58<251:24, 27.1 steps/min]\u001b[92m15:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:17,139 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m15:45:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:17,822 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 206})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:19,201 - agent.ComputerAgent - INFO - Computer: click({'x': 514, 'y': 644})\n",
+ "\u001b[92m15:45:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 517/7340 [19:01<251:06, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:20,554 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 599})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:45:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:21,925 - agent.ComputerAgent - INFO - Computer: click({'x': 381, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 381, 'y': 35})\n",
+ "\u001b[92m15:45:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 519/7340 [19:03<250:30, 27.2 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:22,561 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 597})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:45:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:23,224 - agent.ComputerAgent - INFO - Computer: click({'x': 401, 'y': 324})\n",
+ " 7%|██--------------------------------------| 522/7340 [19:04<249:14, 27.4 steps/min]2025-08-11 15:45:23,891 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:45:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:45:24,559 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:45:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 524/7340 [19:06<248:30, 27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 524/7340 [19:07<248:43, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 524/7340 [19:08<248:57, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/63010886-f715-4208-aef0-b98c456e7e98/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 524/7340 [19:09<249:10, 27.4 steps/min]2025-08-11 15:45:27,970 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:45:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:45:28,630 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m15:45:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 524/7340 [19:10<249:24, 27.3 steps/min]2025-08-11 15:45:29,291 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:29,939 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:30,622 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m15:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 524/7340 [19:12<249:50, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:45:31,285 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:45:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:31,980 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:45:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:32,674 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 524/7340 [19:14<250:16, 27.2 steps/min]2025-08-11 15:45:33,366 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:45:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:34,015 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "\u001b[92m15:45:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 524/7340 [19:17<250:50, 27.2 steps/min]\u001b[92m15:45:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 524/7340 [19:19<251:17, 27.1 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]\u001b[92m15:45:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 524/7340 [19:22<252:05, 27.0 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]\n",
+ " 7%|██--------------------------------------| 524/7340 [19:25<252:44, 27.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:45,090 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 19, 'y': 45})\n",
+ " 7%|██--------------------------------------| 524/7340 [19:26<252:57, 26.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:46,437 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 525/7340 [19:28<252:53, 26.9 steps/min]2025-08-11 15:45:47,736 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 53})\n",
+ "2025-08-11 15:45:48,396 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 167})\n",
+ "2025-08-11 15:45:49,085 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 563})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:49,725 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:45:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:51,720 - agent.ComputerAgent - INFO - Agent: The Vignette filter window is now open in GIMP, ready for adjustments. Task completed.\n",
+ "2025-08-11 15:45:52,353 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 221\n",
+ " - prompt_tokens: 6007\n",
+ " - total_tokens: 6228\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0097\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/reset \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 526/7340 [19:34<253:29, 26.9 steps/min]2025-08-11 15:45:53,028 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 597})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:45:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:45:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 529/7340 [19:35<252:14, 27.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:45:54,386 - agent.ComputerAgent - INFO - Computer: click({'x': 28, 'y': 13})\n",
+ "\u001b[92m15:45:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:45:55,053 - agent.ComputerAgent - INFO - Computer: click({'x': 412, 'y': 350})\n",
+ " 7%|██--------------------------------------| 530/7340 [19:36<252:00, 27.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 532/7340 [19:37<251:11, 27.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:45:57,958 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 532/7340 [19:39<251:36, 27.1 steps/min]2025-08-11 15:45:58,598 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:45:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:45:59,261 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m15:45:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 532/7340 [19:41<251:54, 27.0 steps/min]2025-08-11 15:46:00,279 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:01,601 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:46:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 545/7340 [19:43<245:53, 27.6 steps/min]2025-08-11 15:46:02,249 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:46:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:46:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:46:03,641 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "\u001b[92m15:46:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 545/7340 [19:46<246:27, 27.6 steps/min]\n",
+ "2025-08-11 15:46:04,921 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:46:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:05,561 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 7%|██--------------------------------------| 545/7340 [19:47<246:43, 27.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:06,256 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 153}, {'x': 342, 'y': 741}]})\n",
+ "\u001b[92m15:46:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:06,911 - agent.ComputerAgent - INFO - Computer: click({'x': 631, 'y': 318})\n",
+ "2025-08-11 15:46:07,561 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:46:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f2f2bd2-c1f8-49b6-9b0b-495e746cef64/close \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 545/7340 [19:49<247:08, 27.5 steps/min]2025-08-11 15:46:08,870 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:46:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 547/7340 [19:50<246:26, 27.6 steps/min]2025-08-11 15:46:10,285 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:46:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 547/7340 [19:52<246:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 7%|██--------------------------------------| 547/7340 [19:53<246:56, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:46:12,262 - agent.ComputerAgent - INFO - Computer: type({'text': 'software'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]\u001b[92m15:46:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 547/7340 [19:55<247:25, 27.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 15:46:15,286 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]\n",
+ "2025-08-11 15:46:16,089 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m15:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 15:46:18,231 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m15:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 7%|██--------------------------------------| 549/7340 [20:00<247:24, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:19,430 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 149})\n",
+ " 7%|██--------------------------------------| 549/7340 [20:01<247:38, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:21,092 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m15:46:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 7%|██--------------------------------------| 550/7340 [20:02<247:30, 27.4 steps/min]2025-08-11 15:46:21,777 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:46:21,777 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 534})\n",
+ "2025-08-11 15:46:22,477 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 94})\n",
+ "2025-08-11 15:46:23,108 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:46:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:46:23,770 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:46:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:46:25,109 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "2025-08-11 15:46:25,767 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ " 7%|██--------------------------------------| 550/7340 [20:07<248:27, 27.3 steps/min]\u001b[92m15:46:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:27,495 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:46:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:46:28,161 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 552/7340 [20:09<247:58, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:29,235 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:46:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 552/7340 [20:10<248:11, 27.3 steps/min]\u001b[92m15:46:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:29,923 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 457})\n",
+ " 8%|███-------------------------------------| 553/7340 [20:14<248:19, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 553/7340 [20:15<248:36, 27.3 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:46:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:34,904 - agent.ComputerAgent - INFO - Computer: click({'x': 503, 'y': 544})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 553/7340 [20:16<248:51, 27.3 steps/min]2025-08-11 15:46:35,550 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:46:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 554/7340 [20:17<248:39, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:36,892 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m15:46:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:46:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:37,557 - agent.ComputerAgent - INFO - Computer: click({'x': 79, 'y': 159})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 554/7340 [20:20<249:03, 27.2 steps/min]\u001b[92m15:46:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:38,941 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:46:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:40,284 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 555/7340 [20:25<249:39, 27.2 steps/min]\u001b[92m15:46:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:44,291 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:44,962 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:46:44,963 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 81})\n",
+ "\u001b[92m15:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:46:45,628 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:46:45,629 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 753})\n",
+ " 8%|███-------------------------------------| 555/7340 [20:27<250:04, 27.1 steps/min]2025-08-11 15:46:46,270 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 14, 'y': 527})\n",
+ "2025-08-11 15:46:46,930 - agent.ComputerAgent - INFO - Computer: click({'x': 824, 'y': 263})\n",
+ "\u001b[92m15:46:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:46:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:47,597 - agent.ComputerAgent - INFO - Computer: click({'x': 371, 'y': 598})\n",
+ " 8%|███-------------------------------------| 557/7340 [20:29<249:30, 27.2 steps/min]2025-08-11 15:46:48,243 - agent.ComputerAgent - INFO - Computer: double_click({'x': 469, 'y': 199})\n",
+ "2025-08-11 15:46:48,875 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:46:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 560/7340 [20:30<248:19, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 561/7340 [20:31<248:04, 27.3 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:46:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:46:51,226 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 122})\n",
+ " 8%|███-------------------------------------| 561/7340 [20:32<248:18, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:46:53,036 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 562/7340 [20:34<248:11, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:46:54,192 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m15:46:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae9871c0-5cb9-4c5b-9c02-c899819f9f81/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 564/7340 [20:36<247:36, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:46:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 564/7340 [20:37<247:49, 27.3 steps/min]2025-08-11 15:46:56,592 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:46:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:46:57,264 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:46:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:46:57,922 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 389})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:46:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 564/7340 [20:40<248:21, 27.3 steps/min]2025-08-11 15:46:59,223 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:46:59,882 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 565/7340 [20:41<248:08, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:00,560 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:47:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:01,587 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:47:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/982f8f16-b578-409f-8388-d8d5ee68ccee/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 565/7340 [20:44<248:45, 27.2 steps/min]\u001b[92m15:47:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]2025-08-11 15:47:04,413 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:47:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 565/7340 [20:46<249:02, 27.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:05,831 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:47:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 565/7340 [20:48<249:30, 27.2 steps/min]\u001b[92m15:47:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ " 8%|███-------------------------------------| 565/7340 [20:49<249:42, 27.1 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 8%|███-------------------------------------| 565/7340 [20:50<249:54, 27.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.71s/it]7.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:13,165 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo snap install spotify'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.64s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]7.0 steps/min]\n",
+ "\u001b[92m15:47:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:14,555 - agent.ComputerAgent - INFO - Computer: click({'x': 394, 'y': 95})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 8%|███-------------------------------------| 566/7340 [20:56<250:43, 27.0 steps/min]\u001b[92m15:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:15,927 - agent.ComputerAgent - INFO - Computer: click({'x': 472, 'y': 206})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:47:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:17,296 - agent.ComputerAgent - INFO - Computer: click({'x': 897, 'y': 168})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:47:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 567/7340 [20:59<250:47, 27.0 steps/min]\u001b[92m15:47:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:18,658 - agent.ComputerAgent - INFO - Computer: click({'x': 619, 'y': 133})\n",
+ "2025-08-11 15:47:19,320 - agent.ComputerAgent - INFO - Computer: click({'x': 345, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 569/7340 [21:02<250:22, 27.0 steps/min]\u001b[92m15:47:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:21,315 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 250})\n",
+ "\u001b[92m15:47:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:21,984 - agent.ComputerAgent - INFO - Computer: click({'x': 230, 'y': 35})\n",
+ "\u001b[92m15:47:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 571/7340 [21:03<249:40, 27.1 steps/min]\n",
+ "2025-08-11 15:47:22,669 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 237})\n",
+ "\u001b[92m15:47:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:23,353 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 457})\n",
+ " 8%|███-------------------------------------| 573/7340 [21:05<249:00, 27.2 steps/min]2025-08-11 15:47:24,029 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:47:24,711 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 575/7340 [21:06<248:20, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 575/7340 [21:07<248:33, 27.2 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:27,048 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 713})\n",
+ " 8%|███-------------------------------------| 575/7340 [21:08<248:47, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:28,212 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:47:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 576/7340 [21:09<248:33, 27.2 steps/min]2025-08-11 15:47:28,882 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:47:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:47:29,551 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m15:47:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 576/7340 [21:11<248:50, 27.2 steps/min]2025-08-11 15:47:30,210 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:47:30,857 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 576/7340 [21:12<249:04, 27.2 steps/min]2025-08-11 15:47:31,498 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m15:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:33,198 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 576/7340 [21:14<249:31, 27.1 steps/min]2025-08-11 15:47:33,872 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:47:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:47:34,526 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:47:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:47:35,212 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:47:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 576/7340 [21:16<249:55, 27.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:36,549 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 8%|███-------------------------------------| 576/7340 [21:18<250:10, 27.0 steps/min]2025-08-11 15:47:37,201 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:47:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 583/7340 [21:19<247:11, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:39,042 - agent.ComputerAgent - INFO - Computer: click({'x': 826, 'y': 36})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f1593044-fc61-4fc8-b29d-87e37914d5c2/close \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 584/7340 [21:21<247:08, 27.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 584/7340 [21:23<247:25, 27.3 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 8%|███-------------------------------------| 584/7340 [21:24<247:37, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]\u001b[92m15:47:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 584/7340 [21:25<247:49, 27.3 steps/min]2025-08-11 15:47:44,506 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:47:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]7.2 steps/min]2025-08-11 15:47:45,196 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m15:47:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 584/7340 [21:28<248:25, 27.2 steps/min]\u001b[92m15:47:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 15:47:48,275 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 8%|███-------------------------------------| 584/7340 [21:30<248:43, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:49,513 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 584/7340 [21:31<248:57, 27.1 steps/min]\n",
+ "\u001b[92m15:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:50,187 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:47:50,188 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 589})\n",
+ "\u001b[92m15:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:50,907 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:52,217 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ " 8%|███-------------------------------------| 584/7340 [21:33<249:28, 27.1 steps/min]2025-08-11 15:47:52,853 - agent.ComputerAgent - INFO - Computer: click({'x': 935, 'y': 64})\n",
+ "\u001b[92m15:47:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:47:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:53,527 - agent.ComputerAgent - INFO - Computer: click({'x': 133, 'y': 739})\n",
+ "2025-08-11 15:47:54,182 - agent.ComputerAgent - INFO - Computer: click({'x': 750, 'y': 266})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:47:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:47:56,141 - agent.ComputerAgent - INFO - Computer: type({'text': 'focus editor on breakpoint'})\n",
+ " 8%|███-------------------------------------| 587/7340 [21:37<248:50, 27.1 steps/min]\u001b[92m15:47:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:47:56,772 - agent.ComputerAgent - INFO - Computer: click({'x': 538, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f5260055-0cc8-4e64-8e8d-a8fcf1e7df5c/reset \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 592/7340 [21:39<246:56, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:47:59,473 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:47:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 592/7340 [21:41<247:11, 27.3 steps/min]\n",
+ " 8%|███-------------------------------------| 592/7340 [21:42<247:23, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5260055-0cc8-4e64-8e8d-a8fcf1e7df5c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 592/7340 [21:43<247:34, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:02,535 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "\u001b[92m15:48:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 592/7340 [21:44<247:46, 27.2 steps/min]2025-08-11 15:48:03,554 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 239})\n",
+ "2025-08-11 15:48:04,180 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m15:48:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 593/7340 [21:45<247:38, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:04,872 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:48:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:05,544 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 594/7340 [21:47<247:34, 27.2 steps/min]\n",
+ "2025-08-11 15:48:06,827 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 594/7340 [21:49<247:49, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:08,163 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:48:08,794 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:09,480 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 335})\n",
+ "\u001b[92m15:48:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 594/7340 [21:51<248:11, 27.2 steps/min]2025-08-11 15:48:10,173 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:48:10,174 - agent.ComputerAgent - INFO - Computer: click({'x': 45, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 595/7340 [21:52<247:59, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:11,501 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:48:12,193 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 375})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 375})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 596/7340 [21:53<247:47, 27.2 steps/min]2025-08-11 15:48:12,854 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:48:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 8%|███-------------------------------------| 597/7340 [21:55<247:43, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 597/7340 [21:56<247:54, 27.2 steps/min]2025-08-11 15:48:15,565 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:48:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0a6ee00b-4e8c-4a3f-bac1-9baec4d920a2/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 597/7340 [21:58<248:07, 27.2 steps/min]2025-08-11 15:48:16,943 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:48:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 597/7340 [21:59<248:18, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/10d6b265-637e-4165-a458-35932682a0af/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:18,112 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:48:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 8%|███-------------------------------------| 597/7340 [22:01<248:42, 27.1 steps/min]\u001b[92m15:48:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:20,134 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:48:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]2025-08-11 15:48:21,021 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:48:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:21,698 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:48:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 597/7340 [22:05<249:25, 27.0 steps/min]\u001b[92m15:48:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]\u001b[92m15:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "\u001b[92m15:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 597/7340 [22:07<249:49, 27.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:26,988 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m15:48:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 8%|███-------------------------------------| 597/7340 [22:09<250:15, 26.9 steps/min]\u001b[92m15:48:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:28,327 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:48:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:29,661 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 168})\n",
+ "2025-08-11 15:48:30,318 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:48:30,320 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 654})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 984, 'y': 654})\n",
+ " 8%|███-------------------------------------| 597/7340 [22:12<250:45, 26.9 steps/min]\u001b[92m15:48:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:48:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:30,985 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:31,635 - agent.ComputerAgent - INFO - Computer: click({'x': 741, 'y': 297})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 741, 'y': 297})\n",
+ "\u001b[92m15:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:48:32,292 - agent.ComputerAgent - INFO - Computer: click({'x': 577, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 577, 'y': 429})\n",
+ "2025-08-11 15:48:32,958 - agent.ComputerAgent - INFO - Computer: click({'x': 133, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 133, 'y': 732})\n",
+ "\u001b[92m15:48:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:34,225 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:48:34,226 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "2025-08-11 15:48:34,928 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 756})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 128, 'y': 756})\n",
+ "2025-08-11 15:48:35,593 - agent.ComputerAgent - INFO - Computer: click({'x': 673, 'y': 396})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 673, 'y': 396})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 600/7340 [22:18<250:30, 26.9 steps/min]\u001b[92m15:48:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:48:36,958 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:48:36,959 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 725})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 725})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 606/7340 [22:20<248:11, 27.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:48:38,975 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 577})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 577})\n",
+ "\u001b[92m15:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:48:39,623 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 52})\n",
+ "\u001b[92m15:48:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 607/7340 [22:21<247:58, 27.2 steps/min]2025-08-11 15:48:40,246 - agent.ComputerAgent - INFO - Computer: click({'x': 327, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 327, 'y': 271})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:41,591 - agent.ComputerAgent - INFO - Computer: type({'text': 'spotify & disown'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'spotify & disown'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 8%|███-------------------------------------| 609/7340 [22:23<247:27, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:48:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 8%|███-------------------------------------| 611/7340 [22:24<246:47, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:48:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:48:43,946 - agent.ComputerAgent - INFO - Computer: click({'x': 977, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 977, 'y': 35})\n",
+ " 8%|███-------------------------------------| 611/7340 [22:25<247:00, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5260055-0cc8-4e64-8e8d-a8fcf1e7df5c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:45,687 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:48:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c31f4b36-5141-403e-9c49-5c747feb3d28/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 629/7340 [22:27<239:36, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:47,030 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:48:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:47,694 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:48:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:48,321 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:48:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/reset \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 629/7340 [22:30<240:04, 28.0 steps/min]2025-08-11 15:48:48,975 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:48:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:49,653 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:48:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 629/7340 [22:31<240:18, 27.9 steps/min]2025-08-11 15:48:50,710 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:48:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 629/7340 [22:32<240:29, 27.9 steps/min]2025-08-11 15:48:51,396 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:48:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:52,035 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:52,696 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 629/7340 [22:34<240:51, 27.9 steps/min]2025-08-11 15:48:53,384 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:48:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:54,075 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:48:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 629/7340 [22:35<241:05, 27.8 steps/min]2025-08-11 15:48:54,713 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:48:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:48:55,400 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:48:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 629/7340 [22:37<241:20, 27.8 steps/min]2025-08-11 15:48:56,065 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:48:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:48:56,765 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:48:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 629/7340 [22:38<241:34, 27.8 steps/min]\n",
+ " 9%|███-------------------------------------| 629/7340 [22:41<242:06, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 629/7340 [22:42<242:17, 27.7 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m15:49:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 629/7340 [22:43<242:31, 27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]\u001b[92m15:49:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 15:49:05,066 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x01'})\n",
+ " 9%|███-------------------------------------| 629/7340 [22:46<243:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.55s/it]\u001b[92m15:49:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]\n",
+ "2025-08-11 15:49:07,950 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:49:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 630/7340 [22:51<243:22, 27.6 steps/min]\u001b[92m15:49:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:49:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:09,935 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:49:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:10,589 - agent.ComputerAgent - INFO - Computer: click({'x': 904, 'y': 168})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:11,273 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 105})\n",
+ "\u001b[92m15:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 630/7340 [22:53<243:43, 27.5 steps/min]\n",
+ "\u001b[92m15:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:11,947 - agent.ComputerAgent - INFO - Computer: click({'x': 557, 'y': 276})\n",
+ "2025-08-11 15:49:12,632 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 309})\n",
+ "2025-08-11 15:49:13,323 - agent.ComputerAgent - INFO - Computer: double_click({'x': 813, 'y': 210})\n",
+ "2025-08-11 15:49:14,005 - agent.ComputerAgent - INFO - Computer: click({'x': 592, 'y': 133})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:49:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 632/7340 [22:57<243:36, 27.5 steps/min]2025-08-11 15:49:15,985 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:17,282 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+s'})\n",
+ "\u001b[92m15:49:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:19,337 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ " 9%|███-------------------------------------| 636/7340 [23:01<242:37, 27.6 steps/min]2025-08-11 15:49:19,983 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 174})\n",
+ "2025-08-11 15:49:20,609 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:49:20,611 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:21,946 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 15:49:22,596 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:23,275 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ " 9%|███-------------------------------------| 637/7340 [23:05<242:54, 27.6 steps/min]\u001b[92m15:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:23,935 - agent.ComputerAgent - INFO - Computer: click({'x': 676, 'y': 400})\n",
+ " 9%|███-------------------------------------| 641/7340 [23:08<241:46, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 9%|███-------------------------------------| 641/7340 [23:09<241:56, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:28,289 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ " 9%|███-------------------------------------| 641/7340 [23:10<242:07, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:49:28,949 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 642/7340 [23:11<241:57, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:30,277 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:49:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:30,940 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:49:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:31,606 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m15:49:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:49:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:32,295 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:33,610 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ " 9%|███-------------------------------------| 642/7340 [23:15<242:37, 27.6 steps/min]2025-08-11 15:49:34,662 - agent.ComputerAgent - INFO - Computer: click({'x': 544, 'y': 250})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:49:35,970 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "2025-08-11 15:49:36,635 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ " 9%|███-------------------------------------| 642/7340 [23:18<243:09, 27.5 steps/min]\u001b[92m15:49:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:37,280 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:49:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:37,987 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:49:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 644/7340 [23:19<242:34, 27.6 steps/min]2025-08-11 15:49:38,656 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:49:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:39,327 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:49:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 644/7340 [23:21<242:48, 27.6 steps/min]2025-08-11 15:49:39,973 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m15:49:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:40,627 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:49:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:49:41,316 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m15:49:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/reset \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 644/7340 [23:23<243:08, 27.5 steps/min]2025-08-11 15:49:41,943 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:49:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 644/7340 [23:24<243:19, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 644/7340 [23:25<243:29, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:49:44,118 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m15:49:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:49:44,804 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:49:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 644/7340 [23:26<243:44, 27.5 steps/min]2025-08-11 15:49:45,444 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:49:47,463 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ " 9%|███-------------------------------------| 644/7340 [23:29<244:12, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:48,771 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ "\u001b[92m15:49:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:49,427 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ " 9%|███-------------------------------------| 644/7340 [23:31<244:32, 27.4 steps/min]\u001b[92m15:49:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:50,075 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 137})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:49:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:53,311 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 9%|███-------------------------------------| 645/7340 [23:35<244:47, 27.3 steps/min]\u001b[92m15:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:55,273 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+ctrl+s'})\n",
+ "2025-08-11 15:49:55,966 - agent.ComputerAgent - INFO - Computer: click({'x': 134, 'y': 738})\n",
+ "\u001b[92m15:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:49:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:49:57,281 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ " 9%|███-------------------------------------| 646/7340 [23:39<245:04, 27.3 steps/min]\u001b[92m15:49:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:49:57,959 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 386})\n",
+ "2025-08-11 15:49:58,645 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 286})\n",
+ "\u001b[92m15:49:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:49:59,966 - agent.ComputerAgent - INFO - Computer: type({'text': 'auto save delay'})\n",
+ "2025-08-11 15:50:00,600 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:01,237 - agent.ComputerAgent - INFO - Computer: click({'x': 530, 'y': 249})\n",
+ "\u001b[92m15:50:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:02,566 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:50:02,567 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 647/7340 [23:45<245:47, 27.2 steps/min]\u001b[92m15:50:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:04,543 - agent.ComputerAgent - INFO - Computer: click({'x': 694, 'y': 500})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 9%|███-------------------------------------| 652/7340 [23:46<243:53, 27.4 steps/min]\u001b[92m15:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:05,749 - agent.ComputerAgent - INFO - Computer: click({'x': 672, 'y': 394})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 672, 'y': 394})\n",
+ "\u001b[92m15:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:06,396 - agent.ComputerAgent - INFO - Computer: click({'x': 295, 'y': 29})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 295, 'y': 29})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 653/7340 [23:48<243:51, 27.4 steps/min]\u001b[92m15:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:08,255 - agent.ComputerAgent - INFO - Computer: click({'x': 583, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 583, 'y': 428})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:09,598 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 9%|███-------------------------------------| 655/7340 [23:51<243:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:10,237 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:50:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:10,926 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:50:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:11,568 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:50:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 657/7340 [23:53<243:06, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:12,840 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:50:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:50:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:13,502 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 940, 'y': 203})\n",
+ "2025-08-11 15:50:14,190 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:50:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 657/7340 [23:55<243:26, 27.5 steps/min]2025-08-11 15:50:14,881 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:50:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:15,548 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:50:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 658/7340 [23:57<243:15, 27.5 steps/min]2025-08-11 15:50:16,256 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:50:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:16,938 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:50:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:17,977 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:50:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 658/7340 [24:00<243:46, 27.4 steps/min]\u001b[92m15:50:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:19,240 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:50:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:50:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:19,952 - agent.ComputerAgent - INFO - Computer: click({'x': 755, 'y': 415})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 755, 'y': 415})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 658/7340 [24:01<244:00, 27.4 steps/min]2025-08-11 15:50:20,602 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:50:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 9%|███-------------------------------------| 659/7340 [24:03<243:55, 27.4 steps/min]\u001b[92m15:50:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:22,530 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:24,500 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "2025-08-11 15:50:25,170 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 309})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 332, 'y': 309})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 9%|███-------------------------------------| 659/7340 [24:07<244:35, 27.3 steps/min]\u001b[92m15:50:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:26,470 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:50:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:27,149 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 70, 'y': 249})\n",
+ "\u001b[92m15:50:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:28,158 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 386})\n",
+ " 9%|███-------------------------------------| 660/7340 [24:09<244:34, 27.3 steps/min]2025-08-11 15:50:28,807 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:50:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:50:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:29,463 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 20, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 20, 'y': 142})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:30,800 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 662/7340 [24:13<244:26, 27.3 steps/min]\u001b[92m15:50:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:32,768 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:50:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:50:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:33,435 - agent.ComputerAgent - INFO - Computer: click({'x': 449, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 449, 'y': 278})\n",
+ "\u001b[92m15:50:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 663/7340 [24:15<244:14, 27.3 steps/min]2025-08-11 15:50:34,101 - agent.ComputerAgent - INFO - Computer: click({'x': 573, 'y': 415})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 573, 'y': 415})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:50:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 664/7340 [24:18<244:27, 27.3 steps/min]\u001b[92m15:50:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:50:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:38,498 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "2025-08-11 15:50:39,120 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 335})\n",
+ "2025-08-11 15:50:39,796 - agent.ComputerAgent - INFO - Computer: click({'x': 753, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 753, 'y': 245})\n",
+ "\u001b[92m15:50:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:50:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 9%|███-------------------------------------| 665/7340 [24:22<244:37, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:41,165 - agent.ComputerAgent - INFO - Computer: click({'x': 675, 'y': 400})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 675, 'y': 400})\n",
+ "2025-08-11 15:50:41,824 - agent.ComputerAgent - INFO - Computer: click({'x': 941, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 941, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 667/7340 [24:23<244:02, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:50:42,475 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:50:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:50:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:43,136 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 110, 'y': 737})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 669/7340 [24:24<243:26, 27.4 steps/min]2025-08-11 15:50:43,796 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:50:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "2025-08-11 15:50:44,464 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:50:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:45,159 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:50:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 670/7340 [24:26<243:23, 27.4 steps/min]2025-08-11 15:50:45,846 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:50:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 670/7340 [24:28<243:43, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 670/7340 [24:29<243:53, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:49,049 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:50:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5260055-0cc8-4e64-8e8d-a8fcf1e7df5c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:51,042 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Times New Roman'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 689/7340 [24:32<236:56, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5260055-0cc8-4e64-8e8d-a8fcf1e7df5c/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:50:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:52,337 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:53,061 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 528, 'y': 509})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 528, 'y': 509})\n",
+ "2025-08-11 15:50:53,684 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:50:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 690/7340 [24:35<237:00, 28.1 steps/min]2025-08-11 15:50:54,338 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:50:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:50:55,030 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:50:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:50:56,344 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://settings/manageProfile'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'chrome://settings/manageProfile'})\n",
+ " 9%|███-------------------------------------| 691/7340 [24:38<237:02, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:50:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:50:57,649 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:50:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 692/7340 [24:39<236:52, 28.1 steps/min]2025-08-11 15:50:58,338 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:50:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "\u001b[92m15:50:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]37:09, 28.0 steps/min]2025-08-11 15:51:00,198 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:51:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 692/7340 [24:43<237:28, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]2025-08-11 15:51:03,398 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:51:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+    "  9%|███-------------------------------------| 692/7340 [24:45<237:47, 28.0 steps/min]\n",
+ "2025-08-11 15:51:04,268 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:51:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "\u001b[92m15:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 9%|███-------------------------------------| 692/7340 [24:46<238:03, 27.9 steps/min]2025-08-11 15:51:05,699 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 9%|███-------------------------------------| 692/7340 [24:48<238:15, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:51:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:07,472 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 605, 'y': 278})\n",
+ " 9%|███-------------------------------------| 692/7340 [24:49<238:26, 27.9 steps/min]\u001b[92m15:51:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:08,120 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 428})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:51:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:10,107 - agent.ComputerAgent - INFO - Computer: click({'x': 739, 'y': 728})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 739, 'y': 728})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:12,130 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 9%|███-------------------------------------| 693/7340 [24:53<238:48, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:51:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:13,429 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "2025-08-11 15:51:14,053 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 611, 'x': 537, 'y': 406})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 611, 'x': 537, 'y': 406})\n",
+ "2025-08-11 15:51:14,714 - agent.ComputerAgent - INFO - Computer: click({'x': 667, 'y': 474})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 667, 'y': 474})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:16,724 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ " 9%|███-------------------------------------| 697/7340 [24:58<238:01, 27.9 steps/min]2025-08-11 15:51:17,381 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:51:17,382 - agent.ComputerAgent - INFO - Computer: click({'x': 961, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 961, 'y': 760})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:18,079 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:18,748 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ " 10%|███-------------------------------------| 698/7340 [25:00<237:58, 27.9 steps/min]\u001b[92m15:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:19,807 - agent.ComputerAgent - INFO - Computer: click({'x': 938, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 938, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:51:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 699/7340 [25:03<238:04, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:23,089 - agent.ComputerAgent - INFO - Computer: type({'text': '180000'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '180000'})\n",
+ "\u001b[92m15:51:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:51:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+    " 10%|███-------------------------------------| 700/7340 [25:04<237:54, 27.9 steps/min]\n",
+ "2025-08-11 15:51:23,755 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 128, 'y': 741})\n",
+ "2025-08-11 15:51:24,433 - agent.ComputerAgent - INFO - Computer: click({'x': 396, 'y': 174})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 396, 'y': 174})\n",
+ "\u001b[92m15:51:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 701/7340 [25:06<237:44, 27.9 steps/min]2025-08-11 15:51:25,114 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 168})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 703/7340 [25:07<237:12, 28.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:51:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:26,945 - agent.ComputerAgent - INFO - Computer: click({'x': 394, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 394, 'y': 426})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 704/7340 [25:08<237:00, 28.0 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:51:27,628 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m15:51:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+    " 10%|███-------------------------------------| 705/7340 [25:10<236:52, 28.0 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:51:29,851 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 706/7340 [25:13<237:00, 28.0 steps/min]\u001b[92m15:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:32,253 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 18, 'y': 629})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 18, 'y': 629})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:51:33,582 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 706/7340 [25:15<237:18, 28.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:34,225 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:34,896 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:35,575 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:51:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 707/7340 [25:17<237:15, 28.0 steps/min]2025-08-11 15:51:36,280 - agent.ComputerAgent - INFO - Computer: click({'x': 702, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 702, 'y': 402})\n",
+ "2025-08-11 15:51:36,917 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:51:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:37,576 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:51:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:38,256 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:51:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:38,938 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:51:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 707/7340 [25:20<237:47, 27.9 steps/min]2025-08-11 15:51:39,612 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:40,258 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:51:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:40,938 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:51:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 708/7340 [25:23<237:49, 27.9 steps/min]2025-08-11 15:51:42,263 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:51:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:42,908 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m15:51:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 708/7340 [25:24<238:01, 27.9 steps/min]2025-08-11 15:51:43,567 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:44,213 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:51:44,215 - agent.ComputerAgent - INFO - Computer: click({'x': 904, 'y': 745})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 904, 'y': 745})\n",
+ " 10%|███-------------------------------------| 708/7340 [25:25<238:13, 27.8 steps/min]2025-08-11 15:51:44,906 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 709/7340 [25:28<238:19, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 10%|███-------------------------------------| 710/7340 [25:29<238:06, 27.8 steps/min]2025-08-11 15:51:48,776 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:51:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:51:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:49,430 - agent.ComputerAgent - INFO - Computer: click({'x': 451, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 451, 'y': 213})\n",
+ " 10%|███-------------------------------------| 710/7340 [25:31<238:17, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:51:50,083 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:51:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:51:50,748 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:51:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 711/7340 [25:32<238:08, 27.8 steps/min]2025-08-11 15:51:51,800 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 711/7340 [25:33<238:18, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 711/7340 [25:34<238:29, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:51:54,335 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 628, 'x': 509, 'y': 405})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 628, 'x': 509, 'y': 405})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:51:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 713/7340 [25:36<237:57, 27.8 steps/min]2025-08-11 15:51:54,980 - agent.ComputerAgent - INFO - Computer: click({'x': 884, 'y': 167})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 884, 'y': 167})\n",
+ " 10%|███-------------------------------------| 713/7340 [25:37<238:06, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:51:56,136 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:51:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:51:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 714/7340 [25:38<237:57, 27.8 steps/min]2025-08-11 15:51:57,446 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:51:58,090 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m15:51:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 714/7340 [25:39<238:10, 27.8 steps/min]\u001b[92m15:51:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:51:59,138 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:52:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:52:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 10%|███-------------------------------------| 715/7340 [25:45<238:37, 27.8 steps/min]\u001b[92m15:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:52:04,738 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:52:05,404 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 667, 'x': 173, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 667, 'x': 173, 'y': 53})\n",
+ "2025-08-11 15:52:06,044 - agent.ComputerAgent - INFO - Computer: click({'x': 376, 'y': 623})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 376, 'y': 623})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:52:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:52:09,154 - agent.ComputerAgent - INFO - Computer: type({'text': 'Lecture 1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Lecture 1'})\n",
+ " 10%|███-------------------------------------| 718/7340 [25:50<238:23, 27.8 steps/min]\u001b[92m15:52:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:52:09,799 - agent.ComputerAgent - INFO - Computer: click({'x': 59, 'y': 179})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 59, 'y': 179})\n",
+ "2025-08-11 15:52:10,454 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 432})\n",
+ "2025-08-11 15:52:11,125 - agent.ComputerAgent - INFO - Computer: click({'x': 679, 'y': 625})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 679, 'y': 625})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:52:12,472 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 15:52:13,118 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:52:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:13,775 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:52:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:14,455 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "\u001b[92m15:52:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:52:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 720/7340 [25:56<238:28, 27.8 steps/min]2025-08-11 15:52:15,107 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:52:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:15,798 - agent.ComputerAgent - INFO - Computer: click({'x': 600, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 600, 'y': 128})\n",
+ "2025-08-11 15:52:16,482 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 102, 'y': 643})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 102, 'y': 643})\n",
+ " 10%|███-------------------------------------| 726/7340 [25:59<236:45, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:18,146 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m15:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:18,815 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:52:19,454 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m15:52:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7e161dff-4ce2-4173-944c-04820b713773/reset \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 726/7340 [26:01<237:03, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 726/7340 [26:02<237:13, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:52:21,941 - agent.ComputerAgent - INFO - Agent: I updated your VS Code settings to stop focusing the editor when the debugger breaks.\n",
+ "\n",
+ "What I changed:\n",
+ "- Disabled setting: Debug: Focus Editor On Break (debug.focusEditorOnBreak)\n",
+ "\n",
+ "Effect:\n",
+ "- When debugging hits a breakpoint or pauses, VS Code will no longer steal focus back to the editor. If your cursor is in the Debug Console, it will remain there.\n",
+ "\n",
+ "Tip:\n",
+ "- You can always focus the Debug Console quickly with Ctrl+Shift+Y (Cmd+Shift+Y on macOS).\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I updated your VS Code settings to stop focusing the editor when the debugger breaks.\n",
+ "\n",
+ "What I changed:\n",
+ "- Disabled setting: Debug: Focus Editor On Break (debug.focusEditorOnBreak)\n",
+ "\n",
+ "Effect:\n",
+ "- When debugging hits a breakpoint or pauses, VS Code will no longer steal focus back to the editor. If your cursor is in the Debug Console, it will remain there.\n",
+ "\n",
+ "Tip:\n",
+ "- You can always focus the Debug Console quickly with Ctrl+Shift+Y (Cmd+Shift+Y on macOS).\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 15:52:22,585 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 880\n",
+ " - prompt_tokens: 10821\n",
+ " - total_tokens: 11701\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 768\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0194\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 880\n",
+ " - prompt_tokens: 10821\n",
+ " - total_tokens: 11701\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 768\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0194\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:52:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 728/7340 [26:04<236:48, 27.9 steps/min]2025-08-11 15:52:23,254 - agent.ComputerAgent - INFO - Computer: click({'x': 734, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 734, 'y': 130})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:24,561 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 730/7340 [26:06<236:28, 28.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:52:25,846 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:26,497 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:52:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:27,165 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 731/7340 [26:17<237:40, 27.8 steps/min]2025-08-11 15:52:36,005 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:52:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:37,023 - agent.ComputerAgent - INFO - Computer: click({'x': 452, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 452, 'y': 213})\n",
+ "2025-08-11 15:52:37,665 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:52:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|███-------------------------------------| 731/7340 [26:19<238:00, 27.8 steps/min]2025-08-11 15:52:38,309 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:52:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:38,990 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:52:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 732/7340 [26:20<237:50, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:40,025 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:40,667 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:41,346 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m15:52:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:42,028 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m15:52:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 10%|███-------------------------------------| 732/7340 [26:24<238:22, 27.7 steps/min]\u001b[92m15:52:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:52:43,313 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:52:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:52:44,639 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Times New Roman'})\n",
+ "\u001b[92m15:52:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|███-------------------------------------| 732/7340 [26:26<238:40, 27.7 steps/min]2025-08-11 15:52:45,306 - agent.ComputerAgent - INFO - Computer: double_click({'x': 410, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 410, 'y': 271})\n",
+ "2025-08-11 15:52:45,993 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:52:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 10%|████------------------------------------| 734/7340 [26:27<238:11, 27.7 steps/min]2025-08-11 15:52:46,636 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:52:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:47,670 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 735/7340 [26:29<238:03, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57073384-1b39-4b3a-ab9b-997779ed7ba1/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 741/7340 [26:31<236:09, 27.9 steps/min]2025-08-11 15:52:50,012 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:52:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 741/7340 [26:32<236:21, 27.9 steps/min]2025-08-11 15:52:51,395 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:52:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|████------------------------------------| 741/7340 [26:33<236:30, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 741/7340 [26:34<236:42, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:52:53,785 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:52:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:52:54,429 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:52:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 742/7340 [26:36<236:34, 27.9 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:52:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.67s/it]7.9 steps/min]2025-08-11 15:52:56,465 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:52:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:57,147 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:52:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|████------------------------------------| 743/7340 [26:38<236:36, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 10%|████------------------------------------| 744/7340 [26:39<236:24, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:58,606 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:52:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:52:59,520 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m15:52:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|████------------------------------------| 744/7340 [26:41<236:36, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|████------------------------------------| 744/7340 [26:42<236:46, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 10%|████------------------------------------| 745/7340 [26:43<236:35, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:03,160 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:53:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:53:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 747/7340 [26:46<236:16, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:05,120 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 390, 'y': 385})\n",
+ "2025-08-11 15:53:05,765 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 290})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 290})\n",
+ "\u001b[92m15:53:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:53:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:06,411 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:53:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:07,720 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:53:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:08,408 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:53:08,409 - agent.ComputerAgent - INFO - Computer: click({'x': 92, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 92, 'y': 249})\n",
+ "2025-08-11 15:53:09,063 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 433})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 433})\n",
+ "\u001b[92m15:53:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 10%|████------------------------------------| 747/7340 [26:51<237:03, 27.8 steps/min]\u001b[92m15:53:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:10,420 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 709, 'y': 247})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 709, 'y': 247})\n",
+ "2025-08-11 15:53:11,053 - agent.ComputerAgent - INFO - Computer: click({'x': 346, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 346, 'y': 532})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:53:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:12,350 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 286})\n",
+ "\u001b[92m15:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|████------------------------------------| 751/7340 [26:54<236:01, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:12,987 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:53:12,988 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 477})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 477})\n",
+ "\u001b[92m15:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:13,646 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 250})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 554, 'y': 250})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 10%|████------------------------------------| 755/7340 [26:55<234:49, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|████------------------------------------| 757/7340 [26:56<234:17, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:53:15,452 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:53:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:16,818 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m15:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:17,468 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ " 10%|████------------------------------------| 757/7340 [26:59<234:40, 28.1 steps/min]\u001b[92m15:53:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:18,777 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:20,076 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 168})\n",
+ "\u001b[92m15:53:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:53:21,417 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thomas'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Thomas'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 757/7340 [27:03<235:15, 28.0 steps/min]2025-08-11 15:53:22,037 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:53:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:22,707 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 113, 'y': 180}, {'x': 95, 'y': 418}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 113, 'y': 180}, {'x': 95, 'y': 418}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:53:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 10%|████------------------------------------| 762/7340 [27:04<233:44, 28.1 steps/min]2025-08-11 15:53:23,723 - agent.ComputerAgent - INFO - Computer: click({'x': 631, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 631, 'y': 318})\n",
+ "2025-08-11 15:53:24,426 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:53:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:25,082 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:53:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:53:26,764 - agent.ComputerAgent - INFO - Agent: Spotify has been installed successfully on your system (via Snap, stable channel). I’ve launched it and pinned it to your dock for easy access. You can log in from the open window or find it any time from the dock or the applications menu.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: Spotify has been installed successfully on your system (via Snap, stable channel). I’ve launched it and pinned it to your dock for easy access. You can log in from the open window or find it any time from the dock or the applications menu.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 15:53:27,417 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 446\n",
+ " - prompt_tokens: 12103\n",
+ " - total_tokens: 12549\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4608\n",
+ " - response_cost: $0.0144\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 446\n",
+ " - prompt_tokens: 12103\n",
+ " - total_tokens: 12549\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4608\n",
+ " - response_cost: $0.0144\n",
+ " 10%|████------------------------------------| 764/7340 [27:09<233:42, 28.1 steps/min]2025-08-11 15:53:28,068 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:53:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:28,729 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:53:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:29,784 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:53:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|████------------------------------------| 765/7340 [27:11<233:43, 28.1 steps/min]2025-08-11 15:53:30,456 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:53:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:31,116 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|████------------------------------------| 765/7340 [27:12<233:54, 28.1 steps/min]2025-08-11 15:53:32,162 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:53:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 765/7340 [27:13<234:03, 28.1 steps/min]2025-08-11 15:53:32,841 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:53:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:33,513 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:53:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:34,158 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m15:53:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 765/7340 [27:16<234:26, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:35,541 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:53:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:53:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:36,229 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 91})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 10%|████------------------------------------| 765/7340 [27:17<234:37, 28.0 steps/min]2025-08-11 15:53:36,850 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:53:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:53:37,527 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:53:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 10%|████------------------------------------| 767/7340 [27:19<234:08, 28.1 steps/min]2025-08-11 15:53:38,206 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:53:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:53:39,230 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:53:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 10%|████------------------------------------| 768/7340 [27:21<234:03, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:53:39,910 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:53:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 11%|████------------------------------------| 773/7340 [27:22<232:32, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:53:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:53:41,762 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 292})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 318, 'y': 292})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 773/7340 [27:23<232:42, 28.2 steps/min]2025-08-11 15:53:42,462 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:53:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 774/7340 [27:24<232:34, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:53:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:53:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:44,496 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 101})\n",
+ "\u001b[92m15:53:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/46b96f5a-b99e-443a-93b9-50a22c4b7fb4/close \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 775/7340 [27:26<232:25, 28.2 steps/min]2025-08-11 15:53:45,150 - agent.ComputerAgent - INFO - Computer: click({'x': 679, 'y': 224})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 679, 'y': 224})\n",
+ "2025-08-11 15:53:45,797 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m15:53:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:53:47,801 - agent.ComputerAgent - INFO - Agent: I have changed VLC’s recordings folder to Desktop via Tools > Preferences > Input / Codecs > Record directory or filename and saved the setting.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I have changed VLC’s recordings folder to Desktop via Tools > Preferences > Input / Codecs > Record directory or filename and saved the setting.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 15:53:48,419 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 232\n",
+ " - prompt_tokens: 8729\n",
+ " - total_tokens: 8961\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0132\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 232\n",
+ " - prompt_tokens: 8729\n",
+ " - total_tokens: 8961\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0132\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 777/7340 [27:30<232:18, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 779/7340 [27:31<231:49, 28.3 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:53:50,411 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:53:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:53:51,060 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:53:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 779/7340 [27:33<232:09, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 15:53:53,924 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 780/7340 [27:35<232:04, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]\u001b[92m15:53:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 15:53:56,576 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:53:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:57,947 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 11%|████------------------------------------| 790/7340 [27:39<229:20, 28.6 steps/min]\u001b[92m15:53:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:53:59,304 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:53:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:01,294 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:01,955 - agent.ComputerAgent - INFO - Computer: double_click({'x': 351, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 351, 'y': 111})\n",
+ "\u001b[92m15:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 790/7340 [27:43<229:53, 28.5 steps/min]\u001b[92m15:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:02,594 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:54:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:03,283 - agent.ComputerAgent - INFO - Computer: click({'x': 941, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 941, 'y': 203})\n",
+ "\u001b[92m15:54:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:03,927 - agent.ComputerAgent - INFO - Computer: double_click({'x': 333, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 333, 'y': 532})\n",
+ "\u001b[92m15:54:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:04,602 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:54:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:05,291 - agent.ComputerAgent - INFO - Computer: double_click({'x': 469, 'y': 207})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 469, 'y': 207})\n",
+ " 11%|████------------------------------------| 791/7340 [27:47<230:01, 28.5 steps/min]2025-08-11 15:54:05,922 - agent.ComputerAgent - INFO - Computer: click({'x': 184, 'y': 180})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 184, 'y': 180})\n",
+ "\u001b[92m15:54:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:06,577 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 515, 'y': 283}, {'x': 448, 'y': 357}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 515, 'y': 283}, {'x': 448, 'y': 357}]})\n",
+ "2025-08-11 15:54:07,222 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m15:54:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 794/7340 [27:49<229:20, 28.5 steps/min]2025-08-11 15:54:07,884 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:54:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:08,565 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:54:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/69adc82e-c1c7-4aec-847e-1a5c9a2a0fc8/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14e4659c-f769-426a-90b7-e3bdaf1fa578/close \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 796/7340 [27:50<228:51, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 796/7340 [27:51<229:02, 28.6 steps/min]2025-08-11 15:54:10,587 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:54:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 11%|████------------------------------------| 797/7340 [27:52<228:51, 28.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 797/7340 [27:53<228:59, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:12,285 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m15:54:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 797/7340 [27:54<229:08, 28.6 steps/min]2025-08-11 15:54:13,431 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:54:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:14,087 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:54:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 797/7340 [27:55<229:18, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:54:15,263 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:54:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:16,598 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 798/7340 [27:58<229:18, 28.5 steps/min]2025-08-11 15:54:17,264 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:54:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:54:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:19,234 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:54:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:19,923 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:54:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 799/7340 [28:01<229:27, 28.5 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 11%|████------------------------------------| 799/7340 [28:02<229:35, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:21,600 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:54:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]2025-08-11 15:54:22,253 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:54:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:22,952 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ " 11%|████------------------------------------| 799/7340 [28:04<229:51, 28.5 steps/min]\u001b[92m15:54:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]2025-08-11 15:54:24,529 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 800/7340 [28:07<229:52, 28.4 steps/min]\u001b[92m15:54:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 801/7340 [28:08<229:43, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 11%|████------------------------------------| 801/7340 [28:09<229:51, 28.4 steps/min]\u001b[92m15:54:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:28,439 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 119})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 244, 'y': 119})\n",
+ "\u001b[92m15:54:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:29,089 - agent.ComputerAgent - INFO - Computer: click({'x': 550, 'y': 66})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 550, 'y': 66})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:54:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 802/7340 [28:10<229:43, 28.5 steps/min]2025-08-11 15:54:29,717 - agent.ComputerAgent - INFO - Computer: click({'x': 565, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 565, 'y': 76})\n",
+ "\u001b[92m15:54:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:30,389 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 390, 'y': 385})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 804/7340 [28:12<229:15, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e0e6513c-f386-4bcb-9e3c-82d82c7b14ff/close \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 806/7340 [28:13<228:46, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:32,189 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m15:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 806/7340 [28:14<228:57, 28.5 steps/min]2025-08-11 15:54:33,462 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m15:54:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 806/7340 [28:15<229:06, 28.5 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 11%|████------------------------------------| 806/7340 [28:16<229:14, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]2025-08-11 15:54:37,384 - agent.ComputerAgent - INFO - Computer: type({'text': '180000'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]INFO:agent.ComputerAgent:Computer: type({'text': '180000'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 806/7340 [28:19<229:34, 28.5 steps/min]2025-08-11 15:54:38,008 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:54:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]8.5 steps/min]2025-08-11 15:54:39,332 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:54:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "2025-08-11 15:54:40,013 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:41,327 - agent.ComputerAgent - INFO - Computer: type({'text': '=LEFT(A2;FIND(\" \";A2)-1)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=LEFT(A2;FIND(\" \";A2)-1)'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:42,033 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:44,005 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 808/7340 [28:26<229:54, 28.4 steps/min]\u001b[92m15:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:54:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:45,963 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:54:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:54:46,673 - agent.ComputerAgent - INFO - Computer: click({'x': 488, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 488, 'y': 294})\n",
+ "2025-08-11 15:54:47,327 - agent.ComputerAgent - INFO - Computer: click({'x': 897, 'y': 167})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 897, 'y': 167})\n",
+ " 11%|████------------------------------------| 809/7340 [28:29<229:57, 28.4 steps/min]\u001b[92m15:54:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:54:47,972 - agent.ComputerAgent - INFO - Computer: click({'x': 253, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 253, 'y': 271})\n",
+ "2025-08-11 15:54:48,607 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 606, 'scroll_x': 0, 'x': 526, 'y': 377})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 606, 'scroll_x': 0, 'x': 526, 'y': 377})\n",
+ "\u001b[92m15:54:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 811/7340 [28:30<229:29, 28.5 steps/min]2025-08-11 15:54:49,240 - agent.ComputerAgent - INFO - Computer: click({'x': 362, 'y': 169})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 362, 'y': 169})\n",
+ " 11%|████------------------------------------| 813/7340 [28:31<228:59, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:50,926 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:54:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 814/7340 [28:32<228:50, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 814/7340 [28:33<229:00, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:54:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:53,283 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 287})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 287})\n",
+ " 11%|████------------------------------------| 814/7340 [28:35<229:09, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:54,555 - agent.ComputerAgent - INFO - Computer: type({'text': 'vlcsnap'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'vlcsnap'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 11%|████------------------------------------| 816/7340 [28:36<228:42, 28.5 steps/min]2025-08-11 15:54:55,230 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:54:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:55,926 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:54:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 817/7340 [28:37<228:34, 28.5 steps/min]2025-08-11 15:54:56,543 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:54:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:54:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:54:57,933 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:54:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:54:59,230 - agent.ComputerAgent - INFO - Computer: type({'text': 'Boston Logan International Airport'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Boston Logan International Airport'})\n",
+ " 11%|████------------------------------------| 817/7340 [28:40<229:00, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:54:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:54:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:55:00,561 - agent.ComputerAgent - INFO - Computer: click({'x': 354, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 354, 'y': 306})\n",
+ "2025-08-11 15:55:01,184 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:55:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:55:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 818/7340 [28:42<228:57, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:01,839 - agent.ComputerAgent - INFO - Computer: click({'x': 602, 'y': 560})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 602, 'y': 560})\n",
+ "2025-08-11 15:55:02,523 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:55:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 819/7340 [28:44<228:53, 28.5 steps/min]\u001b[92m15:55:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:55:03,846 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:55:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:55:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:55:04,511 - agent.ComputerAgent - INFO - Computer: click({'x': 452, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 452, 'y': 214})\n",
+ " 11%|████------------------------------------| 820/7340 [28:46<228:45, 28.5 steps/min]2025-08-11 15:55:05,193 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:55:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:05,893 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:55:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 822/7340 [28:47<228:19, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:07,070 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:55:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 822/7340 [28:48<228:28, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:08,273 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:55:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 822/7340 [28:50<228:38, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 822/7340 [28:51<228:46, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:55:10,141 - agent.ComputerAgent - INFO - Computer: click({'x': 251, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 251, 'y': 64})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:10,816 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:55:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 822/7340 [28:52<228:58, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:11,507 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:55:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:12,179 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:55:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:12,815 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:55:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 823/7340 [28:54<228:55, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3d8c0128-5626-42cb-a568-6193f150db3d/reset \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 824/7340 [28:55<228:44, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:14,469 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:55:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:15,135 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:55:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 824/7340 [28:56<228:54, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:15,788 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:55:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 824/7340 [28:59<229:18, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:55:19,523 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m15:55:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 825/7340 [29:01<229:11, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:20,166 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:55:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:22,570 - agent.ComputerAgent - INFO - Computer: type({'text': 'Square'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Square'})\n",
+ " 11%|████------------------------------------| 825/7340 [29:04<229:34, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:55:25,183 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:55:25,184 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:55:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 827/7340 [29:08<229:33, 28.4 steps/min]2025-08-11 15:55:27,827 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 291})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 318, 'y': 291})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:55:29,185 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:30,553 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 15:55:31,214 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:55:31,216 - agent.ComputerAgent - INFO - Computer: move({'x': 19, 'y': 419})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 19, 'y': 419})\n",
+ "2025-08-11 15:55:31,905 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 155})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 11%|████------------------------------------| 828/7340 [29:15<230:07, 28.3 steps/min]\u001b[92m15:55:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:55:34,535 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:55:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:35,207 - agent.ComputerAgent - INFO - Computer: double_click({'x': 422, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 422, 'y': 271})\n",
+ "2025-08-11 15:55:35,882 - agent.ComputerAgent - INFO - Computer: click({'x': 946, 'y': 750})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 946, 'y': 750})\n",
+ "\u001b[92m15:55:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:55:36,541 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:55:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:55:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:55:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 11%|████------------------------------------| 831/7340 [29:19<229:42, 28.3 steps/min]\u001b[92m15:55:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:55:38,583 - agent.ComputerAgent - INFO - Computer: click({'x': 385, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 385, 'y': 249})\n",
+ "\u001b[92m15:55:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:55:39,223 - agent.ComputerAgent - INFO - Computer: click({'x': 720, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 720, 'y': 245})\n",
+ "2025-08-11 15:55:39,865 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -637, 'scroll_x': 0, 'x': 400, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -637, 'scroll_x': 0, 'x': 400, 'y': 532})\n",
+ "2025-08-11 15:55:40,559 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 95, 'y': 178}, {'x': 95, 'y': 419}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 95, 'y': 178}, {'x': 95, 'y': 419}]})\n",
+ "\u001b[92m15:55:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:55:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:41,898 - agent.ComputerAgent - INFO - Computer: type({'text': 'vlc change volume slider color black theme'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'vlc change volume slider color black theme'})\n",
+ " 11%|████------------------------------------| 833/7340 [29:23<229:36, 28.3 steps/min]2025-08-11 15:55:42,536 - agent.ComputerAgent - INFO - Computer: double_click({'x': 452, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 452, 'y': 214})\n",
+ "\u001b[92m15:55:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 11%|████------------------------------------| 838/7340 [29:24<228:11, 28.5 steps/min]\u001b[92m15:55:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:55:43,688 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 517, 'y': 284}, {'x': 442, 'y': 357}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 517, 'y': 284}, {'x': 442, 'y': 357}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:55:44,336 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:55:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 839/7340 [29:26<228:04, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:46,169 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Times New Roman'})\n",
+ " 11%|████------------------------------------| 841/7340 [29:28<227:49, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:48,358 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:55:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 841/7340 [29:30<227:58, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:49,028 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:55:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:50,086 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:55:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:50,746 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:55:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 841/7340 [29:32<228:17, 28.5 steps/min]2025-08-11 15:55:51,415 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:55:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:55:53,152 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ "2025-08-11 15:55:53,802 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:55:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 841/7340 [29:35<228:41, 28.4 steps/min]2025-08-11 15:55:54,489 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m15:55:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:55:55,165 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:55:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 11%|████------------------------------------| 842/7340 [29:36<228:33, 28.4 steps/min]2025-08-11 15:55:55,825 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:55:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "2025-08-11 15:55:56,520 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:55:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 11%|████------------------------------------| 842/7340 [29:38<228:43, 28.4 steps/min]2025-08-11 15:55:57,568 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:55:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 845/7340 [29:39<227:56, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:55:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed975a0b-4ad0-48a8-a0c7-17ac0bcc21c8/close \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 845/7340 [29:40<228:05, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:55:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:00,018 - agent.ComputerAgent - INFO - Computer: click({'x': 312, 'y': 293})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 312, 'y': 293})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 845/7340 [29:42<228:20, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:01,905 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:56:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dafb73ba-e3ed-45a0-b9fc-6565b2800585/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 847/7340 [29:43<227:53, 28.5 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:56:03,248 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:56:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 847/7340 [29:44<228:03, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 848/7340 [29:46<227:53, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cff7dd60-8e0b-4dff-ad9a-e8e48cb0fd9b/close \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 848/7340 [29:47<228:00, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 15:56:06,317 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ " 12%|████------------------------------------| 848/7340 [29:48<228:08, 28.5 steps/min]\u001b[92m15:56:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 15:56:09,148 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c8c54705-3689-4d05-b8e1-7a57903f3a21/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 12%|████------------------------------------| 848/7340 [29:51<228:35, 28.4 steps/min]\u001b[92m15:56:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:10,480 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m15:56:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m15:56:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 848/7340 [29:53<228:51, 28.4 steps/min]\u001b[92m15:56:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.74s/it]2025-08-11 15:56:12,745 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 623, 'scroll_x': 0, 'x': 526, 'y': 376})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 623, 'scroll_x': 0, 'x': 526, 'y': 376})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 848/7340 [29:55<229:03, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "\u001b[92m15:56:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:15,865 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 849/7340 [29:58<229:08, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:06<00:02, 2.27s/it]\u001b[92m15:56:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.72s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:19,720 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:21,255 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 849/7340 [30:02<229:44, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:22,188 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\u001b[92m15:56:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:08<00:00, 2.03s/it]\n",
+ "2025-08-11 15:56:23,638 - agent.ComputerAgent - INFO - Computer: type({'text': 'maximum volume'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'maximum volume'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:24,347 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:56:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:56:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 861/7340 [30:06<226:31, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:25,027 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 209})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 48, 'y': 209})\n",
+ "\u001b[92m15:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:25,696 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 20, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 20, 'y': 92})\n",
+ "\u001b[92m15:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 862/7340 [30:07<226:22, 28.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:26,380 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:56:26,380 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 710})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 710})\n",
+ "\u001b[92m15:56:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:56:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:27,055 - agent.ComputerAgent - INFO - Computer: click({'x': 583, 'y': 174})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 583, 'y': 174})\n",
+ "2025-08-11 15:56:27,699 - agent.ComputerAgent - INFO - Computer: click({'x': 955, 'y': 752})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 955, 'y': 752})\n",
+ "\u001b[92m15:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1222aa7-1f5e-490a-a2aa-8fc134f6b36d/close \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 864/7340 [30:09<226:02, 28.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:28,317 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 411})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 411})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 867/7340 [30:11<225:24, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 15:56:30,951 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]2025-08-11 15:56:32,246 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:56:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 868/7340 [30:14<225:25, 28.7 steps/min]2025-08-11 15:56:32,946 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.71s/it]\u001b[92m15:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:36,494 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:56:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 868/7340 [30:18<225:57, 28.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:37,352 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:56:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:38,001 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m15:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 868/7340 [30:19<226:09, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:38,649 - agent.ComputerAgent - INFO - Computer: click({'x': 359, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 359, 'y': 306})\n",
+ "\u001b[92m15:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:39,318 - agent.ComputerAgent - INFO - Computer: click({'x': 537, 'y': 33})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 537, 'y': 33})\n",
+ " 12%|████------------------------------------| 868/7340 [30:21<226:18, 28.6 steps/min]2025-08-11 15:56:39,966 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:56:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:56:40,650 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m15:56:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 870/7340 [30:22<225:53, 28.6 steps/min]2025-08-11 15:56:41,317 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:56:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:56:41,968 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:56:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 870/7340 [30:23<226:02, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:43,344 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 616, 'scroll_x': 0})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 616, 'scroll_x': 0})\n",
+ " 12%|████------------------------------------| 871/7340 [30:26<226:02, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:56:45,007 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m15:56:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 871/7340 [30:27<226:12, 28.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:56:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:56:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 12%|████------------------------------------| 871/7340 [30:28<226:20, 28.6 steps/min]\u001b[92m15:56:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:47,723 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 189, 'y': 268}, {'x': 393, 'y': 321}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 189, 'y': 268}, {'x': 393, 'y': 321}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 871/7340 [30:29<226:27, 28.6 steps/min]2025-08-11 15:56:48,401 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m15:56:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:56:49,036 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:56:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 873/7340 [30:30<226:02, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 12%|████------------------------------------| 873/7340 [30:31<226:10, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:56:50,757 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m15:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:51,435 - agent.ComputerAgent - INFO - Computer: double_click({'button': 'left', 'x': 488, 'y': 160})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'button': 'left', 'x': 488, 'y': 160})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 874/7340 [30:35<226:17, 28.6 steps/min]\u001b[92m15:56:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:56:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:56:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:54,831 - agent.ComputerAgent - INFO - Computer: click({'x': 729, 'y': 302})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 729, 'y': 302})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:56:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:56:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 874/7340 [30:37<226:36, 28.5 steps/min]2025-08-11 15:56:57,140 - agent.ComputerAgent - INFO - Computer: click({'x': 89, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 89, 'y': 134})\n",
+ "\u001b[92m15:56:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:56:57,813 - agent.ComputerAgent - INFO - Computer: click({'x': 131, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 131, 'y': 89})\n",
+ " 12%|████------------------------------------| 876/7340 [30:39<226:13, 28.6 steps/min]\u001b[92m15:56:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:56:58,491 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 940, 'y': 203})\n",
+ "2025-08-11 15:56:59,140 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:56:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 878/7340 [30:40<225:48, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:57:00,337 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m15:57:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 879/7340 [30:42<225:40, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:00,996 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m15:57:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 12%|████------------------------------------| 879/7340 [30:43<225:50, 28.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:57:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:02,842 - agent.ComputerAgent - INFO - Computer: click({'x': 365, 'y': 242})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 365, 'y': 242})\n",
+ " 12%|████------------------------------------| 879/7340 [30:44<225:58, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 12%|████------------------------------------| 881/7340 [30:45<225:30, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:04,505 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m15:57:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:05,211 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:57:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:06,573 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 12%|████------------------------------------| 881/7340 [30:48<225:50, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:57:07,198 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:57:07,865 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 883/7340 [30:49<225:26, 28.6 steps/min]2025-08-11 15:57:08,548 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:57:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 883/7340 [30:50<225:33, 28.6 steps/min]2025-08-11 15:57:09,729 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:57:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:57:10,398 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:57:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 883/7340 [30:52<225:43, 28.6 steps/min]2025-08-11 15:57:11,428 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m15:57:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:57:12,090 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m15:57:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 883/7340 [30:54<226:01, 28.6 steps/min]\u001b[92m15:57:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:57:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:14,012 - agent.ComputerAgent - INFO - Computer: double_click({'x': 197, 'y': 536})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 197, 'y': 536})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 883/7340 [30:55<226:10, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 12%|████------------------------------------| 884/7340 [30:56<226:00, 28.6 steps/min]\u001b[92m15:57:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:16,208 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 51})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 51})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 885/7340 [30:58<225:56, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:57:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:18,001 - agent.ComputerAgent - INFO - Computer: double_click({'x': 514, 'y': 345})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 514, 'y': 345})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 887/7340 [31:00<225:34, 28.6 steps/min]\u001b[92m15:57:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a23ddde7-5509-407d-af64-ea09807c1af1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:57:19,341 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m15:57:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:57:19,996 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:21,349 - agent.ComputerAgent - INFO - Agent: Firefox has been removed from Favorites. Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: Firefox has been removed from Favorites. Task completed.\n",
+ "2025-08-11 15:57:21,987 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 211\n",
+ " - prompt_tokens: 3974\n",
+ " - total_tokens: 4185\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0042\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 211\n",
+ " - prompt_tokens: 3974\n",
+ " - total_tokens: 4185\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0042\n",
+ " 12%|████------------------------------------| 889/7340 [31:03<225:24, 28.6 steps/min]2025-08-11 15:57:22,648 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 332})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 112, 'y': 332})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:57:23,320 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m15:57:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 890/7340 [31:05<225:21, 28.6 steps/min]\u001b[92m15:57:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:57:24,605 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m15:57:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:57:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 15:57:25,262 - agent.ComputerAgent - INFO - Computer: double_click({'x': 321, 'y': 305})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 321, 'y': 305})\n",
+ "2025-08-11 15:57:25,912 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:57:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 892/7340 [31:07<225:00, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:27,745 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 12%|████------------------------------------| 894/7340 [31:10<224:44, 28.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:57:29,081 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:57:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:29,730 - agent.ComputerAgent - INFO - Computer: click({'x': 614, 'y': 349})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 614, 'y': 349})\n",
+ " 12%|████------------------------------------| 911/7340 [31:11<220:06, 29.2 steps/min]2025-08-11 15:57:30,374 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:57:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 912/7340 [31:12<219:57, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91c7817e-0323-4f6e-9c04-8286d6e368bc/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a8f1594-3659-4132-9059-6fa366033df0/close \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 912/7340 [31:13<220:06, 29.2 steps/min]2025-08-11 15:57:32,683 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m15:57:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:57:33,339 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:57:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 12%|████------------------------------------| 913/7340 [31:15<220:04, 29.2 steps/min]2025-08-11 15:57:35,046 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m15:57:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:37,100 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:57:37,101 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 12%|████------------------------------------| 913/7340 [31:18<220:25, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m15:57:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:38,412 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:57:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.72s/it]9.2 steps/min]2025-08-11 15:57:39,329 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:57:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ " 12%|████------------------------------------| 915/7340 [31:21<220:09, 29.2 steps/min]2025-08-11 15:57:40,016 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m15:57:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:57:40,845 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:57:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 12%|████------------------------------------| 915/7340 [31:22<220:19, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:41,549 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m15:57:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.67s/it]\u001b[92m15:57:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]9.1 steps/min]\n",
+ " 12%|████------------------------------------| 915/7340 [31:25<220:37, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 12%|████------------------------------------| 915/7340 [31:26<220:44, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:57:45,920 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:57:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:57:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:57:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 12%|████------------------------------------| 917/7340 [31:27<220:22, 29.1 steps/min]2025-08-11 15:57:46,584 - agent.ComputerAgent - INFO - Computer: click({'x': 977, 'y': 36})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 977, 'y': 36})\n",
+ "2025-08-11 15:57:47,226 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 588, 'scroll_x': 0, 'x': 697, 'y': 199})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 588, 'scroll_x': 0, 'x': 697, 'y': 199})\n",
+ "2025-08-11 15:57:47,870 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 429})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:49,183 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:57:49,184 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:57:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:50,530 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 12%|████------------------------------------| 917/7340 [31:32<220:54, 29.1 steps/min]\u001b[92m15:57:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:57:51,175 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:57:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:57:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:57:51,863 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 115, 'y': 93})\n",
+ " 13%|█████-----------------------------------| 921/7340 [31:33<219:57, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:52,511 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m15:57:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 922/7340 [31:34<219:48, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:53,862 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 13%|█████-----------------------------------| 922/7340 [31:35<219:55, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:55,031 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:57:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:56,357 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 13%|█████-----------------------------------| 922/7340 [31:38<220:12, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:57,038 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:57:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:57,676 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m15:57:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 926/7340 [31:39<219:16, 29.3 steps/min]2025-08-11 15:57:58,349 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m15:57:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:57:59,033 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m15:57:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 926/7340 [31:40<219:26, 29.2 steps/min]2025-08-11 15:57:59,701 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:57:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10d6b265-637e-4165-a458-35932682a0af/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 926/7340 [31:42<219:34, 29.2 steps/min]2025-08-11 15:58:01,049 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m15:58:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:58:01,740 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m15:58:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 926/7340 [31:44<219:53, 29.2 steps/min]\u001b[92m15:58:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 13%|█████-----------------------------------| 929/7340 [31:46<219:18, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/df59f155-4e77-49b5-877d-dbd25c77d479/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:06,635 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]\n",
+ "\u001b[92m15:58:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 929/7340 [31:49<219:34, 29.2 steps/min]\u001b[92m15:58:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 929/7340 [31:50<219:41, 29.2 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.44s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 930/7340 [31:51<219:37, 29.2 steps/min]\u001b[92m15:58:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:11,130 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m15:58:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:58:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 930/7340 [31:53<219:48, 29.2 steps/min]2025-08-11 15:58:12,600 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m15:58:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 930/7340 [31:55<220:04, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.50s/it]\n",
+ "\u001b[92m15:58:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:15,371 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 190})\n",
+ " 13%|█████-----------------------------------| 931/7340 [31:57<219:57, 29.1 steps/min]\u001b[92m15:58:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:16,186 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 328})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:17,465 - agent.ComputerAgent - INFO - Computer: type({'text': 'Square'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:18,769 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 13%|█████-----------------------------------| 932/7340 [32:01<220:09, 29.1 steps/min]\u001b[92m15:58:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:20,139 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:58:20,140 - agent.ComputerAgent - INFO - Computer: click({'x': 161, 'y': 62})\n",
+ "2025-08-11 15:58:20,821 - agent.ComputerAgent - INFO - Computer: click({'x': 635, 'y': 278})\n",
+ "2025-08-11 15:58:21,475 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 331})\n",
+ "\u001b[92m15:58:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 13%|█████-----------------------------------| 934/7340 [32:03<219:55, 29.1 steps/min]\u001b[92m15:58:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:22,781 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 345})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:58:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:24,084 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m15:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 938/7340 [32:05<219:04, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:24,743 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:58:24,744 - agent.ComputerAgent - INFO - Computer: double_click({'x': 356, 'y': 100})\n",
+ "\u001b[92m15:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:25,438 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 620, 'scroll_x': 0, 'x': 698, 'y': 200})\n",
+ " 13%|█████-----------------------------------| 940/7340 [32:07<218:41, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 942/7340 [32:09<218:22, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:27,933 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:58:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:58:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:28,605 - agent.ComputerAgent - INFO - Computer: double_click({'x': 420, 'y': 271})\n",
+ "\u001b[92m15:58:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 942/7340 [32:10<218:30, 29.3 steps/min]\n",
+ "2025-08-11 15:58:29,244 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 944/7340 [32:11<218:05, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:30,389 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:58:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:31,429 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m15:58:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 945/7340 [32:13<218:02, 29.3 steps/min]2025-08-11 15:58:32,050 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m15:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:58:32,711 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 945/7340 [32:14<218:11, 29.3 steps/min]2025-08-11 15:58:33,353 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m15:58:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:58:34,061 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:58:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 945/7340 [32:15<218:20, 29.3 steps/min]2025-08-11 15:58:34,750 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:58:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:35,420 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m15:58:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 13%|█████-----------------------------------| 946/7340 [32:17<218:13, 29.3 steps/min]2025-08-11 15:58:36,071 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:58:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:58:36,761 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m15:58:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:38,456 - agent.ComputerAgent - INFO - Computer: type({'text': 'Ticketek ticket delivery FAQ'})\n",
+ " 13%|█████-----------------------------------| 946/7340 [32:20<218:33, 29.3 steps/min]2025-08-11 15:58:39,140 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:58:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 947/7340 [32:21<218:24, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:40,831 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m15:58:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 947/7340 [32:22<218:33, 29.2 steps/min]\n",
+ " 13%|█████-----------------------------------| 947/7340 [32:23<218:40, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ac642ef8-5deb-4044-877a-f9b827d28698/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 13%|█████-----------------------------------| 948/7340 [32:24<218:31, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 948/7340 [32:25<218:38, 29.2 steps/min]2025-08-11 15:58:44,934 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:58:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/reset \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 948/7340 [32:26<218:45, 29.2 steps/min]2025-08-11 15:58:45,582 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:58:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:58:46,254 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m15:58:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:48,252 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 13%|█████-----------------------------------| 948/7340 [32:29<219:07, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:49,546 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:58:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:58:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 949/7340 [32:32<219:05, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:58:50,877 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 118, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 118, 'y': 91})\n",
+ "\u001b[92m15:58:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:58:52,172 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 15:58:52,844 - agent.ComputerAgent - INFO - Computer: click({'x': 552, 'y': 635})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 552, 'y': 635})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 13%|█████-----------------------------------| 951/7340 [32:34<218:51, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:53,530 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:58:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 953/7340 [32:35<218:26, 29.2 steps/min]2025-08-11 15:58:54,687 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:58:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 953/7340 [32:36<218:33, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:55,366 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m15:58:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:56,047 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m15:58:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 953/7340 [32:37<218:41, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 953/7340 [32:38<218:48, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:58:58,868 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 954/7340 [32:40<218:44, 29.2 steps/min]2025-08-11 15:58:59,541 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m15:58:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:59:00,223 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m15:59:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 954/7340 [32:42<218:53, 29.2 steps/min]2025-08-11 15:59:00,882 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:59:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:01,531 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m15:59:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 954/7340 [32:43<219:02, 29.2 steps/min]2025-08-11 15:59:02,703 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:59:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 13%|█████-----------------------------------| 955/7340 [32:44<218:54, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:04,024 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:59:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:05,402 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:59:05,403 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ " 13%|█████-----------------------------------| 955/7340 [32:47<219:11, 29.1 steps/min]2025-08-11 15:59:06,058 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m15:59:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:07,147 - agent.ComputerAgent - INFO - Computer: click({'x': 51, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 51, 'y': 60})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 956/7340 [32:49<219:13, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:59:09,143 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 957/7340 [32:51<219:09, 29.1 steps/min]\u001b[92m15:59:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:10,475 - agent.ComputerAgent - INFO - Computer: click({'x': 382, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 382, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:59:11,095 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:59:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 957/7340 [32:53<219:23, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:12,463 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 537})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 537})\n",
+ "\u001b[92m15:59:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:13,844 - agent.ComputerAgent - INFO - Computer: click({'x': 42, 'y': 93})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 42, 'y': 93})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/79295f2f-2987-488c-b4b7-c968f71c7597/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 958/7340 [32:55<219:20, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:14,523 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m15:59:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:59:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:15,809 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 121, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 121, 'y': 90})\n",
+ " 13%|█████-----------------------------------| 960/7340 [32:57<219:02, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:16,476 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m15:59:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:59:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:17,176 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:59:17,177 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 263})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 263})\n",
+ " 13%|█████-----------------------------------| 961/7340 [32:58<218:55, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 962/7340 [32:59<218:46, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:59:19,403 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m15:59:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8d2bfd3-25b5-4989-a85b-9844ae7b3a8b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 962/7340 [33:01<218:59, 29.1 steps/min]\u001b[92m15:59:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m15:59:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:22,094 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 190})\n",
+ " 13%|█████-----------------------------------| 962/7340 [33:03<219:12, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m15:59:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]\u001b[92m15:59:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 963/7340 [33:06<219:11, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:59:25,733 - agent.ComputerAgent - INFO - Agent: The new layer named \"Square\" has been added in GIMP.\n",
+ "Task completed.\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]INFO:agent.ComputerAgent:Agent: The new layer named \"Square\" has been added in GIMP.\n",
+ "Task completed.\n",
+ "2025-08-11 15:59:26,389 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 10090\n",
+ " - total_tokens: 10308\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 5888\n",
+ " - response_cost: $0.0082\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 10090\n",
+ " - total_tokens: 10308\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 5888\n",
+ " - response_cost: $0.0082\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 964/7340 [33:08<219:09, 29.1 steps/min]2025-08-11 15:59:27,262 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m15:59:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]9.3 steps/min]\n",
+ "2025-08-11 15:59:27,972 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m15:59:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:28,679 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m15:59:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc46c6a9-6d89-48f2-aea5-4e33033cff5d/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 13%|█████-----------------------------------| 972/7340 [33:11<217:29, 29.3 steps/min]\u001b[92m15:59:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:31,463 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:32,814 - agent.ComputerAgent - INFO - Computer: click({'x': 565, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 565, 'y': 77})\n",
+ " 13%|█████-----------------------------------| 972/7340 [33:14<217:47, 29.2 steps/min]\u001b[92m15:59:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m15:59:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 15:59:33,471 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m15:59:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m15:59:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:34,170 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 641, 'scroll_x': 0, 'x': 410, 'y': 244})\n",
+ "2025-08-11 15:59:34,815 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 321})\n",
+ "2025-08-11 15:59:35,449 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 15:59:35,450 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 691})\n",
+ "2025-08-11 15:59:36,124 - agent.ComputerAgent - INFO - Computer: double_click({'x': 701, 'y': 105})\n",
+ "2025-08-11 15:59:36,777 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 283})\n",
+ " 13%|█████-----------------------------------| 973/7340 [33:18<217:57, 29.2 steps/min]2025-08-11 15:59:37,452 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:59:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:38,119 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m15:59:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 978/7340 [33:19<216:49, 29.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 978/7340 [33:22<217:09, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:24<214:44, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4a80f461-093d-4b29-93aa-1fdf88fe9a1c/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:25<214:50, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 15:59:44,053 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m15:59:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:44,691 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m15:59:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:26<214:59, 29.5 steps/min]2025-08-11 15:59:45,383 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m15:59:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:46,060 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m15:59:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:27<215:09, 29.5 steps/min]2025-08-11 15:59:46,705 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m15:59:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 15:59:47,402 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m15:59:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:29<215:16, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:32<215:36, 29.5 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]2025-08-11 15:59:52,459 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 191, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m15:59:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]2025-08-11 15:59:54,588 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 13%|█████-----------------------------------| 988/7340 [33:37<216:09, 29.4 steps/min]\u001b[92m15:59:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "\u001b[92m15:59:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 13%|█████-----------------------------------| 990/7340 [33:38<215:45, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 13%|█████-----------------------------------| 990/7340 [33:39<215:51, 29.4 steps/min]\u001b[92m15:59:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:57,843 - agent.ComputerAgent - INFO - Computer: click({'x': 89, 'y': 754})\n",
+ "\u001b[92m15:59:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:58,525 - agent.ComputerAgent - INFO - Computer: click({'x': 407, 'y': 64})\n",
+ " 14%|█████-----------------------------------| 991/7340 [33:40<215:43, 29.4 steps/min]\u001b[92m15:59:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 15:59:59,209 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 121, 'y': 90})\n",
+ "\u001b[92m15:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 15:59:59,853 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 65})\n",
+ " 14%|█████-----------------------------------| 992/7340 [33:41<215:36, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:00,522 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:00:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 994/7340 [33:42<215:12, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:02,177 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:00:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 994/7340 [33:43<215:21, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 994/7340 [33:45<215:29, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:00:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:04,558 - agent.ComputerAgent - INFO - Computer: click({'x': 408, 'y': 266})\n",
+ " 14%|█████-----------------------------------| 994/7340 [33:46<215:36, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:05,182 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:00:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 995/7340 [33:47<215:27, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:06,872 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:00:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 995/7340 [33:49<215:40, 29.4 steps/min]\u001b[92m16:00:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:08,213 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:00:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:00:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:09,587 - agent.ComputerAgent - INFO - Computer: click({'x': 602, 'y': 560})\n",
+ " 14%|█████-----------------------------------| 995/7340 [33:51<215:53, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 996/7340 [33:52<215:47, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:11,559 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 390})\n",
+ "\u001b[92m16:00:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:12,267 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 642, 'scroll_x': 0, 'x': 733, 'y': 579})\n",
+ "\u001b[92m16:00:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 996/7340 [33:53<215:55, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:12,941 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 390})\n",
+ "2025-08-11 16:00:13,615 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:00:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 998/7340 [33:55<215:34, 29.4 steps/min]2025-08-11 16:00:14,257 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:00:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:16,225 - agent.ComputerAgent - INFO - Computer: type({'text': 'Google Play Movies and TV library'})\n",
+ " 14%|█████-----------------------------------| 999/7340 [33:57<215:35, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:00:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:17,412 - agent.ComputerAgent - INFO - Computer: double_click({'x': 212, 'y': 108})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:18,706 - agent.ComputerAgent - INFO - Computer: type({'text': 'cache:help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs'})\n",
+ " 14%|█████-----------------------------------| 1000/7340 [34:00<215:36, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:19,398 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:00:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:20,066 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:00:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:01<215:16, 29.4 steps/min]2025-08-11 16:00:21,235 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:00:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:03<215:22, 29.4 steps/min]2025-08-11 16:00:21,917 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:00:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:05<215:35, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:06<215:42, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:25,072 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:00:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:25,773 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:00:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:07<215:51, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:09<216:02, 29.3 steps/min]\u001b[92m16:00:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:00:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:28,789 - agent.ComputerAgent - INFO - Computer: click({'x': 12, 'y': 525})\n",
+ " 14%|█████-----------------------------------| 1002/7340 [34:10<216:10, 29.3 steps/min]\u001b[92m16:00:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:29,443 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 100, 'y': 390})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 14%|█████-----------------------------------| 1003/7340 [34:12<216:10, 29.3 steps/min]\u001b[92m16:00:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:31,844 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 477})\n",
+ "\u001b[92m16:00:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:33,204 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:00:33,859 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 266})\n",
+ " 14%|█████-----------------------------------| 1011/7340 [34:16<214:34, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ebc1d83c-0240-4fce-85fb-03afaae34955/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:37,554 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 14%|█████-----------------------------------| 1011/7340 [34:19<214:51, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1012/7340 [34:20<214:42, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.75s/it]2025-08-11 16:00:40,079 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:00:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1012/7340 [34:21<214:52, 29.4 steps/min]2025-08-11 16:00:40,773 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:00:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]\u001b[92m16:00:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1012/7340 [34:23<215:03, 29.4 steps/min]2025-08-11 16:00:42,451 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:00:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]2025-08-11 16:00:43,142 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:00:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]29.4 steps/min]\n",
+ "2025-08-11 16:00:44,258 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:00:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1012/7340 [34:26<215:18, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:45,066 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:00:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1012/7340 [34:27<215:24, 29.4 steps/min]\u001b[92m16:00:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:45,729 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 189})\n",
+ "\u001b[92m16:00:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1013/7340 [34:28<215:20, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:47,532 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 57, 'y': 190}, {'x': 50, 'y': 418}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 57, 'y': 190}, {'x': 50, 'y': 418}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:48,859 - agent.ComputerAgent - INFO - Computer: double_click({'x': 381, 'y': 277})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 381, 'y': 277})\n",
+ " 14%|█████-----------------------------------| 1013/7340 [34:30<215:32, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:00:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1015/7340 [34:33<215:21, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:52,390 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 390})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 397, 'y': 390})\n",
+ "\u001b[92m16:00:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:00:53,015 - agent.ComputerAgent - INFO - Computer: click({'x': 28, 'y': 528})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 28, 'y': 528})\n",
+ "\u001b[92m16:00:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1015/7340 [34:34<215:28, 29.4 steps/min]2025-08-11 16:00:53,709 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 622, 'scroll_x': 0, 'x': 507, 'y': 586})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 622, 'scroll_x': 0, 'x': 507, 'y': 586})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/reset \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1017/7340 [34:35<215:05, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:00:54,333 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:00:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1018/7340 [34:37<215:03, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1018/7340 [34:39<215:12, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:00:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:00:58,739 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 596, 'x': 86, 'y': 170})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_x': 0, 'scroll_y': 596, 'x': 86, 'y': 170})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1018/7340 [34:40<215:20, 29.4 steps/min]2025-08-11 16:00:59,412 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:00:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:01:00,458 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:01:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:01:01,128 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:01:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:01:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1019/7340 [34:43<215:24, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:01:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:03,792 - agent.ComputerAgent - INFO - Computer: click({'x': 165, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 165, 'y': 234})\n",
+ " 14%|█████-----------------------------------| 1019/7340 [34:45<215:36, 29.3 steps/min]\u001b[92m16:01:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:04,428 - agent.ComputerAgent - INFO - Computer: double_click({'x': 520, 'y': 345})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 520, 'y': 345})\n",
+ "\u001b[92m16:01:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:05,088 - agent.ComputerAgent - INFO - Computer: click({'x': 518, 'y': 117})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 518, 'y': 117})\n",
+ " 14%|█████-----------------------------------| 1020/7340 [34:46<215:30, 29.3 steps/min]2025-08-11 16:01:05,721 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:01:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:01:07,037 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:01:07,038 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 14%|█████-----------------------------------| 1022/7340 [34:48<215:12, 29.4 steps/min]2025-08-11 16:01:07,717 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:01:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:01:08,385 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:01:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1022/7340 [34:50<215:21, 29.3 steps/min]2025-08-11 16:01:09,046 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:01:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:01:09,688 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:01:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1022/7340 [34:52<215:33, 29.3 steps/min]\u001b[92m16:01:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:11,015 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:01:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:01:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:11,680 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 390})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 100, 'y': 390})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1022/7340 [34:53<215:41, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:01:12,335 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:01:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:01:12,977 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:01:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1023/7340 [34:54<215:35, 29.3 steps/min]2025-08-11 16:01:13,645 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:01:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:01:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1023/7340 [34:56<215:43, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:14,993 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:01:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 14%|█████-----------------------------------| 1023/7340 [34:57<215:49, 29.3 steps/min]\u001b[92m16:01:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:16,589 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:01:16,590 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 649})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 649})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1023/7340 [34:58<216:01, 29.2 steps/min]\u001b[92m16:01:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:01:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:18,386 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 384})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 512, 'y': 384})\n",
+ " 14%|█████-----------------------------------| 1024/7340 [35:00<215:53, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1025/7340 [35:01<215:44, 29.3 steps/min]2025-08-11 16:01:20,073 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:01:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1025/7340 [35:03<215:57, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:01:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1025/7340 [35:04<216:05, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:23,443 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:01:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:01:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:24,137 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 614, 'x': 186, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 614, 'x': 186, 'y': 321})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1026/7340 [35:06<216:03, 29.2 steps/min]\u001b[92m16:01:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:26,128 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:01:26,129 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 628})\n",
+ " 14%|█████-----------------------------------| 1026/7340 [35:07<216:11, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:01:26,774 - agent.ComputerAgent - INFO - Computer: click({'x': 451, 'y': 54})\n",
+ " 14%|█████-----------------------------------| 1027/7340 [35:09<216:05, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:28,084 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:01:28,759 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 264})\n",
+ " 14%|█████-----------------------------------| 1028/7340 [35:10<215:58, 29.2 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:01:29,427 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ " 14%|█████-----------------------------------| 1029/7340 [35:12<215:53, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:01:31,661 - agent.ComputerAgent - INFO - Computer: click({'x': 12, 'y': 524})\n",
+ "2025-08-11 16:01:33,680 - agent.ComputerAgent - INFO - Computer: click({'x': 321, 'y': 153})\n",
+ " 14%|█████-----------------------------------| 1030/7340 [35:15<215:59, 29.2 steps/min]2025-08-11 16:01:34,343 - agent.ComputerAgent - INFO - Computer: click({'x': 178, 'y': 177})\n",
+ "2025-08-11 16:01:35,617 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://r.jina.ai/http://help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:01:37,637 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 615, 'scroll_x': 0, 'x': 537, 'y': 583})\n",
+ "2025-08-11 16:01:38,276 - agent.ComputerAgent - INFO - Computer: click({'x': 679, 'y': 102})\n",
+ " 14%|█████-----------------------------------| 1034/7340 [35:20<215:29, 29.3 steps/min]2025-08-11 16:01:38,915 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "2025-08-11 16:01:39,561 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ " 14%|█████-----------------------------------| 1036/7340 [35:21<215:07, 29.3 steps/min]2025-08-11 16:01:40,223 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "2025-08-11 16:01:40,897 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "2025-08-11 16:01:42,917 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 14%|█████-----------------------------------| 1036/7340 [35:24<215:28, 29.3 steps/min]2025-08-11 16:01:43,578 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "2025-08-11 16:01:46,248 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "2025-08-11 16:01:47,557 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 14%|█████-----------------------------------| 1037/7340 [35:29<215:46, 29.2 steps/min]2025-08-11 16:01:48,866 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "2025-08-11 16:01:49,508 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "2025-08-11 16:01:50,183 - agent.ComputerAgent - INFO - Computer: double_click({'x': 453, 'y': 279})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 14%|█████-----------------------------------| 1040/7340 [35:33<215:22, 29.3 steps/min]2025-08-11 16:01:52,214 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ " 14%|█████-----------------------------------| 1040/7340 [35:36<215:44, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:01:55,901 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 118, 'y': 331})\n",
+ "2025-08-11 16:01:57,288 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 16:01:58,633 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:02:01,239 - agent.ComputerAgent - INFO - Computer: click({'x': 436, 'y': 215})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:02:03,974 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 641, 'x': 86, 'y': 170})\n",
+ " 14%|█████-----------------------------------| 1044/7340 [35:45<215:39, 29.2 steps/min]2025-08-11 16:02:04,646 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ " 14%|█████-----------------------------------| 1045/7340 [35:46<215:31, 29.2 steps/min]2025-08-11 16:02:05,316 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "2025-08-11 16:02:05,977 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "2025-08-11 16:02:06,655 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:02:10,701 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "2025-08-11 16:02:12,034 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:02:12,035 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 16:02:12,743 - agent.ComputerAgent - INFO - Computer: double_click({'x': 445, 'y': 270})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/reset \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1046/7340 [35:54<216:03, 29.1 steps/min]2025-08-11 16:02:13,388 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 153})\n",
+ "2025-08-11 16:02:16,107 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo \"$sourceDir\"; echo \"$targetDir\"; pwd'})\n",
+ "2025-08-11 16:02:16,792 - agent.ComputerAgent - INFO - Computer: click({'x': 80, 'y': 77})\n",
+ "2025-08-11 16:02:17,448 - agent.ComputerAgent - INFO - Computer: click({'x': 247, 'y': 103})\n",
+ " 14%|█████-----------------------------------| 1050/7340 [35:59<215:34, 29.2 steps/min]2025-08-11 16:02:18,106 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "2025-08-11 16:02:20,448 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 618, 'scroll_x': 0, 'x': 514, 'y': 586})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1052/7340 [36:02<215:24, 29.2 steps/min]2025-08-11 16:02:21,087 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:02:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1053/7340 [36:03<215:15, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1053/7340 [36:04<215:21, 29.2 steps/min]2025-08-11 16:02:23,296 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:02:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:02:23,984 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:02:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:02:25,037 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:02:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1053/7340 [36:06<215:36, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:02:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:02:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:02:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:02:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 14%|█████-----------------------------------| 1053/7340 [36:10<215:56, 29.1 steps/min]\u001b[92m16:02:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:02:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:28,944 - agent.ComputerAgent - INFO - Computer: click({'x': 557, 'y': 226})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 557, 'y': 226})\n",
+ "\u001b[92m16:02:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:02:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:30,260 - agent.ComputerAgent - INFO - Computer: click({'x': 149, 'y': 404})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 149, 'y': 404})\n",
+ "\u001b[92m16:02:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 14%|█████-----------------------------------| 1053/7340 [36:11<216:07, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:02:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:30,895 - agent.ComputerAgent - INFO - Computer: right_click({'x': 623, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: right_click({'x': 623, 'y': 739})\n",
+ "2025-08-11 16:02:30,896 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m16:02:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:02:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: right_click\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:02:31,527 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 236})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 236})\n",
+ "\u001b[92m16:02:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:02:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:33,177 - agent.ComputerAgent - INFO - Computer: click({'x': 674, 'y': 102})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 674, 'y': 102})\n",
+ " 14%|█████-----------------------------------| 1055/7340 [36:14<215:56, 29.1 steps/min]\u001b[92m16:02:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:33,835 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:02:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:02:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:34,494 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:02:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:02:35,177 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:02:35,178 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 166})\n",
+ "\u001b[92m16:02:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 14%|█████-----------------------------------| 1058/7340 [36:16<215:25, 29.2 steps/min]2025-08-11 16:02:35,863 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 110, 'y': 180}, {'x': 111, 'y': 418}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 110, 'y': 180}, {'x': 111, 'y': 418}]})\n",
+ "2025-08-11 16:02:36,488 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:02:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:02:38,193 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'convert -version'\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"bash -lc 'convert -version'\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:02:39,489 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Times New Roman'})\n",
+ " 14%|█████-----------------------------------| 1059/7340 [36:21<215:36, 29.1 steps/min]2025-08-11 16:02:40,176 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:02:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1062/7340 [36:22<215:00, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:02:41,374 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:02:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1062/7340 [36:23<215:05, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:02:42,564 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:02:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 14%|█████-----------------------------------| 1062/7340 [36:24<215:12, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:02:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:02:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:45,273 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1062/7340 [36:27<215:28, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:45,900 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:02:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:46,553 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:02:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:02:47,197 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:02:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:02:47,861 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:02:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:02:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:02:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:48,519 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:02:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 14%|█████-----------------------------------| 1064/7340 [36:30<215:23, 29.1 steps/min]\u001b[92m16:02:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:49,800 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:02:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:02:50,472 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 345})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 345})\n",
+ "2025-08-11 16:02:51,139 - agent.ComputerAgent - INFO - Computer: click({'x': 52, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 52, 'y': 133})\n",
+ " 14%|█████-----------------------------------| 1064/7340 [36:32<215:35, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:51,791 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:02:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:02:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:52,505 - agent.ComputerAgent - INFO - Computer: click({'x': 141, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 141, 'y': 446})\n",
+ " 15%|█████-----------------------------------| 1066/7340 [36:34<215:14, 29.1 steps/min]2025-08-11 16:02:53,151 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:02:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:02:53,816 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:02:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:02:54,861 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:02:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1067/7340 [36:37<215:18, 29.1 steps/min]\u001b[92m16:02:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:02:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:56,690 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:02:56,690 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 577})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 577})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1067/7340 [36:39<215:28, 29.1 steps/min]\u001b[92m16:02:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:02:58,056 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:02:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:02:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:02:58,735 - agent.ComputerAgent - INFO - Computer: click({'x': 152, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 152, 'y': 149})\n",
+ " 15%|█████-----------------------------------| 1068/7340 [36:40<215:22, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:00,041 - agent.ComputerAgent - INFO - Computer: click({'x': 162, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 162, 'y': 62})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80cac10f-cdb8-428d-a03b-1e499f48cf49/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 15%|█████-----------------------------------| 1070/7340 [36:41<215:02, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:02,098 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:03,451 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ " 15%|█████-----------------------------------| 1071/7340 [36:45<215:07, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:04,738 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:03:05,377 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:03:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:03:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|█████-----------------------------------| 1072/7340 [36:47<215:09, 29.1 steps/min]2025-08-11 16:03:06,789 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:03:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:07,436 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:03:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:03:09,000 - agent.ComputerAgent - INFO - Computer: type({'text': 'Promotions'})\n",
+ " 15%|█████-----------------------------------| 1073/7340 [36:50<215:11, 29.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:09,655 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m16:03:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]2025-08-11 16:03:10,313 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:03:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+    " 15%|█████-----------------------------------| 1074/7340 [36:52<215:05, 29.1 steps/min]",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.66s/it]2025-08-11 16:03:12,535 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1074/7340 [36:55<215:23, 29.1 steps/min]\u001b[92m16:03:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 16:03:14,014 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:14,851 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 15%|█████-----------------------------------| 1076/7340 [36:56<215:04, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:15,715 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:03:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:16,406 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:03:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+    " 15%|█████-----------------------------------| 1076/7340 [36:58<215:13, 29.1 steps/min]",
+ "\u001b[92m16:03:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:03:17,085 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:03:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:17,743 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 135})\n",
+ "\u001b[92m16:03:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|█████-----------------------------------| 1076/7340 [36:59<215:20, 29.1 steps/min]2025-08-11 16:03:18,407 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 643, 'scroll_x': 0, 'x': 514, 'y': 587})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:03:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1077/7340 [37:00<215:15, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:19,738 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:03:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:03:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:03:20,387 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 778, 'y': 137})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:21,662 - agent.ComputerAgent - INFO - Computer: type({'text': 'B6'})\n",
+ " 15%|█████-----------------------------------| 1078/7340 [37:03<215:15, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:22,309 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m16:03:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|█████-----------------------------------| 1080/7340 [37:06<215:04, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:26,223 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://r.jina.ai/http://help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1080/7340 [37:08<215:17, 29.1 steps/min]\u001b[92m16:03:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:27,544 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:03:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:03:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:28,568 - agent.ComputerAgent - INFO - Computer: click({'x': 73, 'y': 105})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1082/7340 [37:10<214:59, 29.1 steps/min]2025-08-11 16:03:29,256 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m16:03:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:30,531 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:03:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|█████-----------------------------------| 1083/7340 [37:14<215:07, 29.1 steps/min]\u001b[92m16:03:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:03:32,887 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:03:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:34,240 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:03:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:03:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:36,222 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:03:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:37,535 - agent.ComputerAgent - INFO - Computer: type({'text': '=LEFT(A2,SEARCH(\" \",A2)-1)'})\n",
+ "\u001b[92m16:03:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:38,863 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 15%|█████-----------------------------------| 1084/7340 [37:20<215:31, 29.0 steps/min]\u001b[92m16:03:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:40,208 - agent.ComputerAgent - INFO - Computer: type({'text': 'printenv | grep -i dir'})\n",
+ "2025-08-11 16:03:40,897 - agent.ComputerAgent - INFO - Computer: click({'x': 556, 'y': 451})\n",
+ "\u001b[92m16:03:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:03:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:03:41,545 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 732})\n",
+ "2025-08-11 16:03:42,247 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ " 15%|█████-----------------------------------| 1086/7340 [37:23<215:22, 29.0 steps/min]\u001b[92m16:03:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:42,913 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:03:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:43,593 - agent.ComputerAgent - INFO - Computer: double_click({'x': 525, 'y': 345})\n",
+ "2025-08-11 16:03:44,251 - agent.ComputerAgent - INFO - Computer: click({'x': 690, 'y': 160})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:45,615 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1089/7340 [37:28<215:06, 29.1 steps/min]\u001b[92m16:03:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:03:47,301 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:03:47,968 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 183})\n",
+ " 15%|█████-----------------------------------| 1092/7340 [37:29<214:31, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:48,611 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m16:03:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|█████-----------------------------------| 1093/7340 [37:31<214:29, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1093/7340 [37:32<214:35, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bfbe6e66-d4ef-4cdf-88f2-a26724fe1dc0/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:52,500 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:03:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1094/7340 [37:34<214:30, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:53,582 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:03:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:54,241 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m16:03:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:03:55,533 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1094/7340 [37:37<214:47, 29.1 steps/min]2025-08-11 16:03:56,216 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:56,890 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1095/7340 [37:38<214:42, 29.1 steps/min]2025-08-11 16:03:57,575 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:03:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:03:58,239 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:03:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:03:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1095/7340 [37:41<214:55, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:03:59,939 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:03:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:00,627 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:04:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|█████-----------------------------------| 1096/7340 [37:42<214:49, 29.1 steps/min]2025-08-11 16:04:01,320 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:04:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:04:03,670 - agent.ComputerAgent - INFO - Computer: type({'text': '132'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]29.0 steps/min]2025-08-11 16:04:04,508 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m16:04:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|█████-----------------------------------| 1097/7340 [37:46<214:58, 29.0 steps/min]2025-08-11 16:04:05,158 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:04:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:07,561 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 16:04:09,097 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ " 15%|█████-----------------------------------| 1097/7340 [37:50<215:23, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:04:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:11,658 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+='})\n",
+ "\u001b[92m16:04:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:04:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:04:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|█████-----------------------------------| 1100/7340 [37:54<215:00, 29.0 steps/min]2025-08-11 16:04:12,939 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 574})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 574})\n",
+ "2025-08-11 16:04:13,633 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 406})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 406})\n",
+ "2025-08-11 16:04:14,297 - agent.ComputerAgent - INFO - Computer: click({'x': 493, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 493, 'y': 157})\n",
+ "\u001b[92m16:04:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:14,949 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:04:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:15,631 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:04:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:16,298 - agent.ComputerAgent - INFO - Computer: click({'x': 432, 'y': 173})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 432, 'y': 173})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:17,637 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 15%|█████-----------------------------------| 1100/7340 [37:59<215:30, 29.0 steps/min]2025-08-11 16:04:18,289 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 176})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1105/7340 [38:00<214:29, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:04:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:20,196 - agent.ComputerAgent - INFO - Computer: double_click({'x': 520, 'y': 345})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 520, 'y': 345})\n",
+ " 15%|██████----------------------------------| 1106/7340 [38:01<214:22, 29.1 steps/min]2025-08-11 16:04:20,838 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:04:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:21,519 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:04:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|██████----------------------------------| 1107/7340 [38:04<214:22, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1107/7340 [38:05<214:27, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:04:24,215 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:04:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:24,869 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:04:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1107/7340 [38:06<214:34, 29.0 steps/min]2025-08-11 16:04:25,539 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:04:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:26,218 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:04:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:26,891 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:04:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 15%|██████----------------------------------| 1108/7340 [38:08<214:33, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1108/7340 [38:09<214:39, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:28,699 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:04:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:04:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:29,379 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 152})\n",
+ " 15%|██████----------------------------------| 1108/7340 [38:11<214:46, 29.0 steps/min]2025-08-11 16:04:30,068 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:04:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:31,440 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:04:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|██████----------------------------------| 1109/7340 [38:13<214:44, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:32,739 - agent.ComputerAgent - INFO - Agent: Found it.\n",
+ "\n",
+ "- Ticket Delivery FAQs (official): https://help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs\n",
+ "- Text-only mirror (in case the site is blocked): https://r.jina.ai/http://help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Found it.\n",
+ "\n",
+ "- Ticket Delivery FAQs (official): https://help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs\n",
+ "- Text-only mirror (in case the site is blocked): https://r.jina.ai/http://help.ticketek.com.au/hc/en-us/articles/360001877308-Ticket-Delivery-FAQs\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:04:33,407 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 663\n",
+ " - prompt_tokens: 11466\n",
+ " - total_tokens: 12129\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 576\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 9984\n",
+ " - response_cost: $0.0097\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 663\n",
+ " - prompt_tokens: 11466\n",
+ " - total_tokens: 12129\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 576\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 9984\n",
+ " - response_cost: $0.0097\n",
+ "\u001b[92m16:04:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1110/7340 [38:15<214:41, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:34,032 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:04:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:34,680 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 95, 'y': 53})\n",
+ " 15%|██████----------------------------------| 1110/7340 [38:16<214:49, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:04:35,987 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+up'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+up'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1111/7340 [38:17<214:42, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:04:37,160 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:04:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e161dff-4ce2-4173-944c-04820b713773/close \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1115/7340 [38:18<213:54, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1116/7340 [38:20<213:48, 29.1 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:04:39,138 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:04:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1116/7340 [38:21<213:55, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]\u001b[92m16:04:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1116/7340 [38:22<214:02, 29.1 steps/min]2025-08-11 16:04:41,669 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:04:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]\u001b[92m16:04:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1116/7340 [38:24<214:10, 29.1 steps/min]2025-08-11 16:04:43,057 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:04:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "2025-08-11 16:04:44,640 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -la ~'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ls -la ~'})\n",
+ " 15%|██████----------------------------------| 1116/7340 [38:26<214:22, 29.0 steps/min]2025-08-11 16:04:45,459 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:04:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|██████----------------------------------| 1117/7340 [38:27<214:14, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 15%|██████----------------------------------| 1118/7340 [38:28<214:06, 29.1 steps/min]\u001b[92m16:04:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:47,295 - agent.ComputerAgent - INFO - Computer: click({'x': 990, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 990, 'y': 732})\n",
+ "\u001b[92m16:04:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:48,336 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:04:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:49,027 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 213})\n",
+ " 15%|██████----------------------------------| 1118/7340 [38:30<214:20, 29.0 steps/min]\u001b[92m16:04:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:04:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:50,072 - agent.ComputerAgent - INFO - Computer: click({'x': 508, 'y': 313})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 508, 'y': 313})\n",
+ "2025-08-11 16:04:50,722 - agent.ComputerAgent - INFO - Computer: click({'x': 607, 'y': 258})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 607, 'y': 258})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1121/7340 [38:33<213:56, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:52,719 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:53,374 - agent.ComputerAgent - INFO - Computer: click({'x': 713, 'y': 137})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 713, 'y': 137})\n",
+ " 15%|██████----------------------------------| 1123/7340 [38:35<213:36, 29.1 steps/min]\u001b[92m16:04:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:04:54,035 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:04:55,404 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1126/7340 [38:38<213:13, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:04:57,746 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1126/7340 [38:39<213:20, 29.1 steps/min]2025-08-11 16:04:58,403 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 135})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 430, 'y': 135})\n",
+ "2025-08-11 16:04:59,027 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:04:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|██████----------------------------------| 1126/7340 [38:40<213:27, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:04:59,698 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:04:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1132/7340 [38:41<212:13, 29.3 steps/min]2025-08-11 16:05:00,860 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:05:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03d4be17-8d56-461e-a12c-f5a051bc16e8/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1132/7340 [38:43<212:20, 29.2 steps/min]2025-08-11 16:05:02,161 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:05:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:02,840 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:05:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:03,540 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:05:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba1d4f22-1020-487e-92c0-65f72be50c88/close \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1132/7340 [38:45<212:32, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 15%|██████----------------------------------| 1132/7340 [38:47<212:41, 29.2 steps/min]\u001b[92m16:05:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc8c197a-dafa-435a-ba50-58bfb98db578/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:06,602 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 15%|██████----------------------------------| 1132/7340 [38:48<212:48, 29.2 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:05:07,266 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:05:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 15%|██████----------------------------------| 1133/7340 [38:49<212:41, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 16:05:09,641 - agent.ComputerAgent - INFO - Computer: click({'x': 184, 'y': 179, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 184, 'y': 179, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]29.2 steps/min]2025-08-11 16:05:10,431 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:05:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:05:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 15%|██████----------------------------------| 1134/7340 [38:52<212:46, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]\u001b[92m16:05:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "\u001b[92m16:05:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:14,038 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 15%|██████----------------------------------| 1134/7340 [38:55<213:02, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:15,395 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:05:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:05:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:05:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:16,711 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:18,023 - agent.ComputerAgent - INFO - Computer: type({'text': '132'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '132'})\n",
+ "2025-08-11 16:05:18,685 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 15%|██████----------------------------------| 1134/7340 [39:00<213:28, 29.1 steps/min]\u001b[92m16:05:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:19,364 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:05:19,365 - agent.ComputerAgent - INFO - Computer: click({'x': 557, 'y': 270})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 557, 'y': 270})\n",
+ "2025-08-11 16:05:20,015 - agent.ComputerAgent - INFO - Computer: click({'x': 106, 'y': 100})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 106, 'y': 100})\n",
+ "2025-08-11 16:05:20,688 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 209})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 128, 'y': 209})\n",
+ "\u001b[92m16:05:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:05:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:21,983 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:05:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:22,658 - agent.ComputerAgent - INFO - Computer: click({'x': 424, 'y': 280})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 424, 'y': 280})\n",
+ " 15%|██████----------------------------------| 1136/7340 [39:04<213:23, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:23,810 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:05:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:05:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1140/7340 [39:05<212:36, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:24,481 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 126})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 343, 'y': 126})\n",
+ " 16%|██████----------------------------------| 1141/7340 [39:07<212:34, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:27,286 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:05:27,287 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:05:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1141/7340 [39:09<212:45, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:28,592 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:05:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:29,291 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:05:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:05:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1142/7340 [39:11<212:40, 29.1 steps/min]2025-08-11 16:05:29,991 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:05:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:30,671 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 735})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 735})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:32,000 - agent.ComputerAgent - INFO - Agent: I can’t complete this as requested. Playing a Google Play Movies & TV purchase directly in VLC would require bypassing Google’s DRM (Widevine), which I won’t help with. Additionally, Stranger Things is a Netflix original and is typically not sold on Google Play, so it may not even be available for purchase there.\n",
+ "\n",
+ "Legal ways to watch:\n",
+ "- In a browser: Go to play.google.com/tv (or tv.google.com), sign in to your Google account, search for the title in your Library, and play it in the browser.\n",
+ "- On devices: Use the Google TV app on Android/iOS/Android TV or YouTube app where your Google Play purchases appear, and watch or cast to a TV.\n",
+ "- If you have DRM‑free video files you own, you can play them in VLC via Media > Open File (or drag-and-drop the files into VLC).\n",
+ "\n",
+ "If you want me to, I can play the show in the browser via Google TV, or play any DRM‑free local media in VLC.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I can’t complete this as requested. Playing a Google Play Movies & TV purchase directly in VLC would require bypassing Google’s DRM (Widevine), which I won’t help with. Additionally, Stranger Things is a Netflix original and is typically not sold on Google Play, so it may not even be available for purchase there.\n",
+ "\n",
+ "Legal ways to watch:\n",
+ "- In a browser: Go to play.google.com/tv (or tv.google.com), sign in to your Google account, search for the title in your Library, and play it in the browser.\n",
+ "- On devices: Use the Google TV app on Android/iOS/Android TV or YouTube app where your Google Play purchases appear, and watch or cast to a TV.\n",
+ "- If you have DRM‑free video files you own, you can play them in VLC via Media > Open File (or drag-and-drop the files into VLC).\n",
+ "\n",
+ "If you want me to, I can play the show in the browser via Google TV, or play any DRM‑free local media in VLC.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 16:05:33,041 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1371\n",
+ " - prompt_tokens: 8756\n",
+ " - total_tokens: 10127\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0218\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1371\n",
+ " - prompt_tokens: 8756\n",
+ " - total_tokens: 10127\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0218\n",
+ " 16%|██████----------------------------------| 1143/7340 [39:14<212:46, 29.1 steps/min]2025-08-11 16:05:33,721 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:05:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:34,369 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:05:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1144/7340 [39:16<212:41, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:05:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:36,085 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:05:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:37,424 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1144/7340 [39:19<213:01, 29.1 steps/min]\u001b[92m16:05:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:05:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:05:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:39,411 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:40,051 - agent.ComputerAgent - INFO - Computer: click({'x': 247, 'y': 103})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 247, 'y': 103})\n",
+ "\u001b[92m16:05:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1144/7340 [39:21<213:11, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:40,712 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:05:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:41,401 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:05:41,402 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 32})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 32})\n",
+ "\u001b[92m16:05:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:42,101 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1145/7340 [39:23<213:09, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:05:42,775 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 165})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 165})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:43,825 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:05:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1146/7340 [39:26<213:09, 29.1 steps/min]\u001b[92m16:05:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:45,138 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:05:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:05:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:05:45,820 - agent.ComputerAgent - INFO - Computer: click({'x': 709, 'y': 305})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 709, 'y': 305})\n",
+ " 16%|██████----------------------------------| 1147/7340 [39:27<213:03, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1157/7340 [39:28<210:57, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9618024b-01b2-4c48-8a72-2ec16bffcf41/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:48,172 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1157/7340 [39:30<211:08, 29.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:05:49,481 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "2025-08-11 16:05:51,874 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -la ~/Desktop'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:05:57,535 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "2025-08-11 16:05:59,330 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 209})\n",
+ "2025-08-11 16:06:01,294 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -628, 'scroll_x': 0, 'x': 864, 'y': 308})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1162/7340 [39:45<211:24, 29.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0e6d8acb-be63-4d81-aa52-5ea37aacb64e/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:06:08,633 - agent.ComputerAgent - INFO - Computer: type({'text': 'Discount to Promotions'})\n",
+ "2025-08-11 16:06:11,951 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 16:06:17,370 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:06:18,723 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "2025-08-11 16:06:21,362 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:06:22,687 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 87, 'y': 165}, {'x': 90, 'y': 406}]})\n",
+ " 16%|██████----------------------------------| 1172/7340 [40:07<211:11, 29.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3d8c0128-5626-42cb-a568-6193f150db3d/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:06:33,475 - agent.ComputerAgent - INFO - Computer: type({'text': 'Times New Roman'})\n",
+ " 16%|██████----------------------------------| 1176/7340 [40:16<211:08, 29.2 steps/min]\n",
+ "2025-08-11 16:06:45,980 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:06:45,981 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 761})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 761})\n",
+ "\u001b[92m16:06:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:06:46,664 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 179})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 179})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:06:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:06:47,980 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 16%|██████----------------------------------| 1176/7340 [40:29<212:15, 29.0 steps/min]2025-08-11 16:06:48,613 - agent.ComputerAgent - INFO - Computer: click({'x': 1000, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1000, 'y': 10})\n",
+ "\u001b[92m16:06:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:06:49,259 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 53})\n",
+ "2025-08-11 16:06:49,932 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:06:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1178/7340 [40:31<211:59, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1180/7340 [40:35<211:55, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:06:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:06:55,325 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:06:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:06:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:06:56,675 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ " 16%|██████----------------------------------| 1180/7340 [40:38<212:09, 29.0 steps/min]2025-08-11 16:06:57,351 - agent.ComputerAgent - INFO - Computer: click({'x': 826, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 826, 'y': 202})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:06:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e6d8acb-be63-4d81-aa52-5ea37aacb64e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1181/7340 [40:39<212:03, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:06:58,672 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:06:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:06:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:06:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:00,681 - agent.ComputerAgent - INFO - Computer: type({'text': 'printenv | egrep -i \"^(source|target).*|.*(source|target).*\"'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'printenv | egrep -i \"^(source|target).*|.*(source|target).*\"'})\n",
+ "2025-08-11 16:07:01,333 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 213})\n",
+ " 16%|██████----------------------------------| 1182/7340 [40:43<212:07, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:07:02,012 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:07:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:07:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:02,690 - agent.ComputerAgent - INFO - Computer: type({'text': '132', 'x': 717, 'y': 137})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '132', 'x': 717, 'y': 137})\n",
+ " 16%|██████----------------------------------| 1185/7340 [40:44<211:36, 29.1 steps/min]2025-08-11 16:07:03,305 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:07:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1185/7340 [40:45<211:43, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:07:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:05,122 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 739})\n",
+ " 16%|██████----------------------------------| 1185/7340 [40:46<211:49, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:07:05,758 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:07:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1186/7340 [40:47<211:41, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:07,445 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:07:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1186/7340 [40:49<211:48, 29.1 steps/min]2025-08-11 16:07:08,086 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:07:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1186/7340 [40:50<211:55, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:07:10,140 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:07:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1187/7340 [40:51<211:49, 29.0 steps/min]2025-08-11 16:07:10,792 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 54})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 54})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:11,434 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:07:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1187/7340 [40:53<211:56, 29.0 steps/min]2025-08-11 16:07:12,077 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:07:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:13,402 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ "2025-08-11 16:07:14,042 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:07:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1188/7340 [40:55<211:57, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:15,388 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:07:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1188/7340 [40:57<212:04, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:07:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:17,270 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:07:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1188/7340 [40:59<212:17, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:18,607 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:07:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:07:19,279 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 889, 'y': 44}, {'x': 1011, 'y': 45}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 889, 'y': 44}, {'x': 1011, 'y': 45}]})\n",
+ "\u001b[92m16:07:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1189/7340 [41:01<212:11, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:19,975 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:07:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:07:20,642 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 736})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 111, 'y': 736})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1190/7340 [41:02<212:06, 29.0 steps/min]2025-08-11 16:07:21,285 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:07:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:07:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1191/7340 [41:06<212:12, 29.0 steps/min]\u001b[92m16:07:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:07:25,094 - agent.ComputerAgent - INFO - Computer: click({'x': 474, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 474, 'y': 219})\n",
+ "\u001b[92m16:07:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:25,732 - agent.ComputerAgent - INFO - Computer: click({'x': 943, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 943, 'y': 230})\n",
+ " 16%|██████----------------------------------| 1191/7340 [41:07<212:19, 29.0 steps/min]\u001b[92m16:07:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:26,422 - agent.ComputerAgent - INFO - Computer: click({'x': 104, 'y': 738})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 104, 'y': 738})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:07:27,055 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:07:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1194/7340 [41:08<211:47, 29.0 steps/min]2025-08-11 16:07:27,712 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:07:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1195/7340 [41:09<211:40, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:28,875 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:07:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1195/7340 [41:10<211:45, 29.0 steps/min]2025-08-11 16:07:30,072 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:07:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:07:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1195/7340 [41:12<211:54, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:07:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:31,917 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 176})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e6d8acb-be63-4d81-aa52-5ea37aacb64e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1195/7340 [41:13<212:00, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:32,577 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:07:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:07:33,234 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:07:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:34,530 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://history/'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'chrome://history/'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 16%|██████----------------------------------| 1197/7340 [41:16<211:48, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:35,198 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:07:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:07:35,895 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:07:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1198/7340 [41:17<211:42, 29.0 steps/min]2025-08-11 16:07:36,573 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:07:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1198/7340 [41:18<211:47, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:37,763 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:07:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1198/7340 [41:20<211:58, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:40,624 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 16%|██████----------------------------------| 1198/7340 [41:22<212:06, 29.0 steps/min]2025-08-11 16:07:41,275 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:07:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:42,607 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:07:42,608 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "2025-08-11 16:07:43,258 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:07:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 16%|██████----------------------------------| 1199/7340 [41:24<212:07, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:44,588 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1200/7340 [41:27<212:08, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:07:47,085 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:07:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:48,438 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 16%|██████----------------------------------| 1201/7340 [41:30<212:08, 28.9 steps/min]2025-08-11 16:07:49,111 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 553, 'y': 275})\n",
+ "2025-08-11 16:07:49,762 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:51,099 - agent.ComputerAgent - INFO - Agent: The system volume has been set to maximum using the volume slider in the system menu.\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: The system volume has been set to maximum using the volume slider in the system menu.\n",
+ "Task completed\n",
+ "2025-08-11 16:07:51,775 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 476\n",
+ " - prompt_tokens: 4009\n",
+ " - total_tokens: 4485\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0069\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 476\n",
+ " - prompt_tokens: 4009\n",
+ " - total_tokens: 4485\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0069\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:07:52,465 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:07:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:07:53,096 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:07:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:07:53,743 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:07:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 16%|██████----------------------------------| 1203/7340 [41:37<212:21, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:07:57,120 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:07:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:07:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:07:59,191 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ " 17%|██████----------------------------------| 1218/7340 [41:40<209:30, 29.2 steps/min]2025-08-11 16:07:59,880 - agent.ComputerAgent - INFO - Computer: click({'x': 896, 'y': 233})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 896, 'y': 233})\n",
+ "2025-08-11 16:08:00,570 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 53})\n",
+ "\u001b[92m16:08:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:08:01,217 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:08:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:08:01,863 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 239})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 239})\n",
+ "\u001b[92m16:08:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 17%|██████----------------------------------| 1219/7340 [41:43<209:31, 29.2 steps/min]2025-08-11 16:08:02,521 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 213})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:08:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 17%|██████----------------------------------| 1223/7340 [41:45<208:49, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:08:04,393 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:08:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:08:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 17%|██████----------------------------------| 1224/7340 [41:46<208:42, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:08:05,077 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:08:05,078 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7955abad-b178-4311-85d5-7f1dedbecbcc/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e6d8acb-be63-4d81-aa52-5ea37aacb64e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1224/7340 [41:47<208:49, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:07,103 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1225/7340 [41:48<208:43, 29.3 steps/min]2025-08-11 16:08:07,745 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:08:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e6d8acb-be63-4d81-aa52-5ea37aacb64e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:08:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/33ed1889-3b8e-4690-ab09-a5ad0f7de2c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1241/7340 [41:50<205:38, 29.7 steps/min]2025-08-11 16:08:09,448 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:08:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:08:10,485 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:08:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:08:11,168 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:08:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e6d8acb-be63-4d81-aa52-5ea37aacb64e/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:08:12,528 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:08:12,529 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 17%|██████----------------------------------| 1241/7340 [41:54<205:56, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:08:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1254/7340 [41:56<203:33, 29.9 steps/min]\u001b[92m16:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "2025-08-11 16:08:15,900 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.83s/it]\u001b[92m16:08:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:08:16,585 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:08:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1254/7340 [41:58<203:42, 29.9 steps/min]2025-08-11 16:08:17,272 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:08:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:08:17,954 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:08:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.73s/it]29.9 steps/min]\n",
+ "2025-08-11 16:08:19,819 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:03<00:11, 3.85s/it]\u001b[92m16:08:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[AINFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8bb6b36b-e7fb-4e80-916a-501fa7ad17f9/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1254/7340 [42:01<203:57, 29.8 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:08:21,169 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1255/7340 [42:02<203:52, 29.8 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1255/7340 [42:04<203:58, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:23,055 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:08:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1255/7340 [42:05<204:03, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:08<00:00, 2.22s/it]\n",
+ "2025-08-11 16:08:25,404 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:04<00:14, 4.76s/it]\u001b[92m16:08:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1255/7340 [42:07<204:13, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:26,556 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:08:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1256/7340 [42:08<204:06, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:06<00:05, 2.93s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:27,738 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:08:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1256/7340 [42:09<204:12, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 17%|██████----------------------------------| 1256/7340 [42:10<204:17, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:08<00:00, 2.15s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:30,830 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+='})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ac642ef8-5deb-4044-877a-f9b827d28698/close \"HTTP/1.1 200 OK\"\n",
+ " 17%|██████----------------------------------| 1257/7340 [42:12<204:15, 29.8 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:08:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:08:32,785 - agent.ComputerAgent - INFO - Computer: click({'x': 613, 'y': 249})\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 16:08:36,569 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo \"Home: $HOME\"; ls -la ~/Desktop/dir1; echo \"---\"; ls -la ~/Desktop/dir3'})\n",
+ " 17%|██████----------------------------------| 1258/7340 [42:18<204:32, 29.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:41,480 - agent.ComputerAgent - INFO - Computer: click({'x': 106, 'y': 249})\n",
+ "2025-08-11 16:08:42,173 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 234})\n",
+ "2025-08-11 16:08:43,517 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'\"})\n",
+ "2025-08-11 16:08:44,174 - agent.ComputerAgent - INFO - Computer: click({'x': 515, 'y': 101})\n",
+ "2025-08-11 16:08:44,819 - agent.ComputerAgent - INFO - Computer: click({'x': 28, 'y': 528})\n",
+ "2025-08-11 16:08:45,470 - agent.ComputerAgent - INFO - Computer: click({'x': 79, 'y': 158})\n",
+ "2025-08-11 16:08:46,108 - agent.ComputerAgent - INFO - Computer: click({'x': 604, 'y': 429})\n",
+ " 17%|██████----------------------------------| 1275/7340 [42:27<202:00, 30.0 steps/min]\n",
+ "2025-08-11 16:08:47,258 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:08:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:08:48,600 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "2025-08-11 16:08:49,271 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:08:49,271 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 75, 'y': 86}, {'x': 248, 'y': 241}]})\n",
+ "2025-08-11 16:08:50,620 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:08:54,058 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 180, 'button': 'left'})\n",
+ " 17%|██████----------------------------------| 1280/7340 [42:37<201:46, 30.0 steps/min]\n",
+ "2025-08-11 16:09:01,662 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 736})\n",
+ "2025-08-11 16:09:06,145 - agent.ComputerAgent - INFO - Computer: type({'text': 'youtube.com'})\n",
+ "2025-08-11 16:09:07,961 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 641})\n",
+ "2025-08-11 16:09:09,314 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:09:11,246 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "2025-08-11 16:09:11,899 - agent.ComputerAgent - INFO - Computer: click({'x': 628, 'y': 428})\n",
+ "2025-08-11 16:09:13,890 - agent.ComputerAgent - INFO - Computer: click({'x': 736, 'y': 402})\n",
+ "2025-08-11 16:09:16,546 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "2025-08-11 16:09:17,872 - agent.ComputerAgent - INFO - Computer: type({'text': 'find . -type f -exec chmod 644 {} +'})\n",
+ " 18%|███████---------------------------------| 1285/7340 [42:59<202:35, 29.9 steps/min]\n",
+ "2025-08-11 16:09:18,534 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 148})\n",
+ "2025-08-11 16:09:19,210 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 213})\n",
+ "2025-08-11 16:09:21,161 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "2025-08-11 16:09:21,829 - agent.ComputerAgent - INFO - Computer: click({'x': 825, 'y': 232})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:09:29,812 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "2025-08-11 16:09:31,290 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 54})\n",
+ "2025-08-11 16:09:32,886 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:09:39,796 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "2025-08-11 16:09:40,924 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ " 18%|███████---------------------------------| 1292/7340 [43:22<203:03, 29.8 steps/min]\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:09:41,585 - agent.ComputerAgent - INFO - Computer: click({'x': 571, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 571, 'y': 249})\n",
+ "2025-08-11 16:09:42,238 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 205, 'y': 737})\n",
+ "2025-08-11 16:09:42,900 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:09:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:09:43,578 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:09:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:09:44,227 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:09:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:09:44,880 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:09:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:09:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:09:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1292/7340 [43:27<203:25, 29.7 steps/min]2025-08-11 16:09:49,426 - agent.ComputerAgent - INFO - Computer: click({'x': 356, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 356, 'y': 234})\n",
+ " 18%|███████---------------------------------| 1294/7340 [43:31<203:20, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:09:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:09:50,630 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ " 18%|███████---------------------------------| 1295/7340 [43:32<203:14, 29.7 steps/min]2025-08-11 16:09:51,320 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:09:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:09:52,000 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:09:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1296/7340 [43:33<203:09, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/reset \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1296/7340 [43:34<203:14, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1296/7340 [43:35<203:18, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:09:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:09:55,350 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:09:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1296/7340 [43:37<203:24, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:09:55,979 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:09:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:09:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:09:56,663 - agent.ComputerAgent - INFO - Computer: click({'x': 130, 'y': 207})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 130, 'y': 207})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1296/7340 [43:38<203:31, 29.7 steps/min]2025-08-11 16:09:57,329 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:09:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:09:58,732 - agent.ComputerAgent - INFO - Computer: click({'x': 641, 'y': 436})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 641, 'y': 436})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:09:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1297/7340 [43:41<203:32, 29.7 steps/min]2025-08-11 16:10:00,108 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:10:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:10:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:02,127 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 18%|███████---------------------------------| 1298/7340 [43:43<203:33, 29.7 steps/min]\u001b[92m16:10:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:10:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:10:03,465 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 230})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:10:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:10:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1299/7340 [43:45<203:31, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:04,770 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 115, 'y': 237})\n",
+ "\u001b[92m16:10:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:05,439 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:10:05,440 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 753})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 753})\n",
+ "\u001b[92m16:10:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:10:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1300/7340 [43:47<203:29, 29.7 steps/min]2025-08-11 16:10:06,742 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 280})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 553, 'y': 280})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:07,395 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "\u001b[92m16:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1302/7340 [43:49<203:12, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:10:08,052 - agent.ComputerAgent - INFO - Computer: click({'x': 124, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 124, 'y': 89})\n",
+ " 18%|███████---------------------------------| 1303/7340 [43:50<203:06, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:10,414 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:11,706 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:10:11,707 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+e'})\n",
+ " 18%|███████---------------------------------| 1304/7340 [43:53<203:09, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:12,370 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:10:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:13,019 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:10:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1304/7340 [43:55<203:19, 29.7 steps/min]\u001b[92m16:10:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:14,349 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:10:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:10:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:10:15,044 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 286})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:16,412 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo \"Source: ~/Desktop/dir1\"; echo \"Target: ~/Desktop/dir3\"; echo \"Source tree:\"; find ~/Desktop/dir1 -type d | sed \\'s|.*/Desktop/||\\'; echo \"---\"; echo \"Target tree before:\"; find ~/Desktop/dir3 -type d | sed \\'s|.*/Desktop/||\\''})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'echo \"Source: ~/Desktop/dir1\"; echo \"Target: ~/Desktop/dir3\"; echo \"Source tree:\"; find ~/Desktop/dir1 -type d | sed \\'s|.*/Desktop/||\\'; echo \"---\"; echo \"Target tree before:\"; find ~/Desktop/dir3 -type d | sed \\'s|.*/Desktop/||\\''})\n",
+ " 18%|███████---------------------------------| 1304/7340 [43:58<203:31, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:17,671 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "2025-08-11 16:10:18,341 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:10:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:10:19,030 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1306/7340 [44:00<203:21, 29.7 steps/min]2025-08-11 16:10:19,692 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:20,391 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:10:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1307/7340 [44:02<203:15, 29.7 steps/min]2025-08-11 16:10:21,058 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:10:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:21,727 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:10:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1307/7340 [44:03<203:22, 29.7 steps/min]2025-08-11 16:10:22,371 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:10:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+    "2025-08-11 16:10:23,069 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+    "\u001b[92m16:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+    "LiteLLM completion() model= gpt-5; provider = openai\n",
+    " 18%|███████---------------------------------| 1307/7340 [44:06<203:37, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+    "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+    "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+    "2025-08-11 16:10:28,272 - agent.ComputerAgent - INFO - Computer: click({'x': 682, 'y': 234})\n",
+    "2025-08-11 16:10:30,968 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1 *.png 2>/dev/null || true'\"})\n",
+    "2025-08-11 16:10:34,242 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 658})\n",
+    " 18%|███████---------------------------------| 1311/7340 [44:17<203:43, 29.6 steps/min]2025-08-11 16:10:36,912 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 735})\n",
+ "2025-08-11 16:10:38,184 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo find . -type f -exec chmod 644 {} +'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sudo find . -type f -exec chmod 644 {} +'})\n",
+ "2025-08-11 16:10:38,849 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 390})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 100, 'y': 390})\n",
+ "\u001b[92m16:10:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1313/7340 [44:21<203:36, 29.6 steps/min]\u001b[92m16:10:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:10:40,187 - agent.ComputerAgent - INFO - Computer: click({'x': 359, 'y': 258})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 359, 'y': 258})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:10:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:10:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1316/7340 [44:22<203:08, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:41,570 - agent.ComputerAgent - INFO - Computer: click({'x': 131, 'y': 91, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 131, 'y': 91, 'button': 'left'})\n",
+ "2025-08-11 16:10:42,221 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:10:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:10:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1317/7340 [44:24<203:03, 29.7 steps/min]2025-08-11 16:10:42,902 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 233})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 910, 'y': 233})\n",
+ " 18%|███████---------------------------------| 1318/7340 [44:25<202:56, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:45,238 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://upload.wikimedia.org/wikipedia/en/thumb/1/1e/The_University_of_Hong_Kong_crest.svg/1200px-The_University_of_Hong_Kong_crest.svg.png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://upload.wikimedia.org/wikipedia/en/thumb/1/1e/The_University_of_Hong_Kong_crest.svg/1200px-The_University_of_Hong_Kong_crest.svg.png'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1319/7340 [44:26<202:54, 29.7 steps/min]2025-08-11 16:10:45,918 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:10:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:46,633 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1320/7340 [44:28<202:49, 29.7 steps/min]2025-08-11 16:10:47,821 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:10:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1320/7340 [44:29<202:55, 29.7 steps/min]2025-08-11 16:10:48,618 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:10:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:10:49,365 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:10:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1320/7340 [44:31<203:01, 29.7 steps/min]2025-08-11 16:10:50,151 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:10:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:10:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:10:51,510 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:10:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1320/7340 [44:33<203:11, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:52,195 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:10:52,855 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1320/7340 [44:34<203:17, 29.6 steps/min]2025-08-11 16:10:53,549 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:10:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:10:54,253 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 420, 'y': 162}, {'x': 170, 'y': 133}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 420, 'y': 162}, {'x': 170, 'y': 133}]})\n",
+ " 18%|███████---------------------------------| 1320/7340 [44:36<203:24, 29.6 steps/min]2025-08-11 16:10:55,291 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1321/7340 [44:37<203:17, 29.6 steps/min]2025-08-11 16:10:55,940 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:10:56,604 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:10:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1321/7340 [44:38<203:23, 29.6 steps/min]2025-08-11 16:10:57,291 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:10:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1321/7340 [44:40<203:32, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:10:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:10:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:11:00,151 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 219})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1321/7340 [44:42<203:43, 29.5 steps/min]\u001b[92m16:11:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:11:02,396 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 429})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:11:03,765 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 18%|███████---------------------------------| 1322/7340 [44:45<203:44, 29.5 steps/min]2025-08-11 16:11:04,441 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:11:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:11:05,798 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:11:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1324/7340 [44:48<203:37, 29.5 steps/min]\u001b[92m16:11:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:11:07,755 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:11:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:11:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:11:09,113 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 16:11:09,745 - agent.ComputerAgent - INFO - Computer: click({'x': 434, 'y': 418})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 434, 'y': 418})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:11:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:11:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1325/7340 [44:52<203:44, 29.5 steps/min]\u001b[92m16:11:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:11:12,504 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:11:13,143 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:11:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:11:13,809 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 291})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 248, 'y': 291})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1326/7340 [44:56<203:48, 29.5 steps/min]\u001b[92m16:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:11:15,137 - agent.ComputerAgent - INFO - Computer: click({'x': 293, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 293, 'y': 185})\n",
+ "\u001b[92m16:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:11:15,821 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:11:16,500 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:11:16,501 - agent.ComputerAgent - INFO - Computer: click({'x': 650, 'y': 362})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 650, 'y': 362})\n",
+ "\u001b[92m16:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1328/7340 [44:58<203:35, 29.5 steps/min]2025-08-11 16:11:17,188 - agent.ComputerAgent - INFO - Computer: double_click({'x': 247, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 247, 'y': 153})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:11:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1330/7340 [44:59<203:19, 29.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:11:18,491 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:11:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:11:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:11:19,525 - agent.ComputerAgent - INFO - Computer: click({'x': 867, 'y': 233})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 867, 'y': 233})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 18%|███████---------------------------------| 1331/7340 [45:01<203:18, 29.6 steps/min]\u001b[92m16:11:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:11:21,243 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:11:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 18%|███████---------------------------------| 1332/7340 [45:02<203:11, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:11:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:11:21,948 - agent.ComputerAgent - INFO - Computer: click({'x': 573, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 573, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:11:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1332/7340 [45:04<203:18, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:11:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:11:23,815 - agent.ComputerAgent - INFO - Computer: click({'x': 254, 'y': 736})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 254, 'y': 736})\n",
+        "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+        " 18%|███████---------------------------------| 1333/7340 [45:05<203:12, 29.6 steps/min]2025-08-11 16:11:24,492 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+        "LiteLLM completion() model= gpt-5; provider = openai\n",
+        "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+        "2025-08-11 16:11:29,210 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+        "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+        "2025-08-11 16:11:31,535 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 685, 'x': 633, 'y': 405})\n",
+        "2025-08-11 16:11:35,279 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1 *.png'\"})\n",
+        "2025-08-11 16:11:35,955 - agent.ComputerAgent - INFO - Computer: double_click({'x': 49, 'y': 431})\n",
+        "2025-08-11 16:11:37,999 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 176})\n",
+        " 18%|███████---------------------------------| 1337/7340 [45:19<203:31, 29.5 steps/min]2025-08-11 16:11:38,642 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+        "2025-08-11 16:11:40,487 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 624})\n",
+        "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+        "2025-08-11 16:11:45,987 - agent.ComputerAgent - INFO - Computer: type({'text': 'find . -type f -perm -not -0644 -ls | head -n 20'})\n",
+        "2025-08-11 16:11:46,670 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 339})\n",
+        "2025-08-11 16:11:47,338 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 741})\n",
+        "2025-08-11 16:11:48,711 - agent.ComputerAgent - INFO - Computer: click({'x': 1000, 'y': 739})\n",
+        "2025-08-11 16:11:50,053 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 275})\n",
+        " 18%|███████---------------------------------| 1344/7340 [45:33<203:14, 29.5 steps/min]2025-08-11 16:11:52,081 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+        "2025-08-11 16:11:54,178 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+        "2025-08-11 16:11:55,482 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+v'})\n",
+        "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+        "2025-08-11 16:11:58,155 - agent.ComputerAgent - INFO - Computer: click({'x': 496, 'y': 256})\n",
+        "2025-08-11 16:11:59,512 - agent.ComputerAgent - INFO - Computer: click({'x': 188, 'y': 54})\n",
+        "2025-08-11 16:12:00,870 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+        "2025-08-11 16:12:06,650 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 677, 'x': 633, 'y': 362})\n",
+        "2025-08-11 16:12:09,759 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 736})\n",
+        "2025-08-11 16:12:10,408 - agent.ComputerAgent - INFO - Computer: click({'x': 234, 'y': 97})\n",
+        "2025-08-11 16:12:11,099 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 162})\n",
+        " 18%|███████---------------------------------| 1349/7340 [45:52<203:45, 29.4 steps/min]2025-08-11 16:12:11,772 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+        "2025-08-11 16:12:17,601 - agent.ComputerAgent - INFO - Computer: click({'x': 684, 'y': 41})\n",
+        "2025-08-11 16:12:18,978 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 213})\n",
+        "2025-08-11 16:12:19,629 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 149})\n",
+        "2025-08-11 16:12:20,317 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 564})\n",
+        "2025-08-11 16:12:21,645 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 339})\n",
+        " 18%|███████---------------------------------| 1356/7340 [46:05<203:22, 29.4 steps/min]2025-08-11 16:12:24,123 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:12:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:12:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:24,852 - agent.ComputerAgent - INFO - Computer: click({'x': 623, 'y': 359})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 623, 'y': 359})\n",
+ " 18%|███████---------------------------------| 1356/7340 [46:06<203:28, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:26,220 - agent.ComputerAgent - INFO - Computer: type({'text': 'source=~/Desktop/dir1; target=~/Desktop/dir3; if [ -d \"$source\" ] && [ -d \"$target\" ]; then rsync -a -f\"+ */\" -f\"- *\" \"$source\" \"$target\"; echo \"Copied directory hierarchy.\"; else echo \"Source or target directory not found\"; fi'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'source=~/Desktop/dir1; target=~/Desktop/dir3; if [ -d \"$source\" ] && [ -d \"$target\" ]; then rsync -a -f\"+ */\" -f\"- *\" \"$source\" \"$target\"; echo \"Copied directory hierarchy.\"; else echo \"Source or target directory not found\"; fi'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:12:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 18%|███████---------------------------------| 1357/7340 [46:08<203:27, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:12:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:28,062 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 90})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:12:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:30,106 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1358/7340 [46:11<203:29, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:12:31,450 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 676, 'scroll_x': 0})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 676, 'scroll_x': 0})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:12:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:12:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1361/7340 [46:13<203:05, 29.4 steps/min]2025-08-11 16:12:32,793 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 280})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 553, 'y': 280})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:12:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:34,134 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:12:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:12:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1361/7340 [46:15<203:14, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:12:35,495 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -1'}\"})\n",
+ "2025-08-11 16:12:36,154 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 263, 'y': 318})\n",
+ "2025-08-11 16:12:36,816 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:12:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:12:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1362/7340 [46:18<203:15, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:37,525 - agent.ComputerAgent - INFO - Computer: click({'x': 426, 'y': 257})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 426, 'y': 257})\n",
+ "2025-08-11 16:12:38,193 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:12:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1364/7340 [46:20<203:02, 29.4 steps/min]\u001b[92m16:12:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:12:39,517 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:12:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:12:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:40,192 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 232})\n",
+ "2025-08-11 16:12:40,843 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:12:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1365/7340 [46:23<203:03, 29.4 steps/min]\u001b[92m16:12:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:12:42,204 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:12:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:12:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:43,551 - agent.ComputerAgent - INFO - Computer: click({'x': 835, 'y': 36})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 835, 'y': 36})\n",
+ "2025-08-11 16:12:44,241 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:12:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:12:44,933 - agent.ComputerAgent - INFO - Computer: click({'x': 433, 'y': 635})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 433, 'y': 635})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1367/7340 [46:26<202:56, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:45,580 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:12:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:12:46,215 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:12:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1369/7340 [46:28<202:40, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:46,933 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:12:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:12:47,614 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:12:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1369/7340 [46:29<202:46, 29.4 steps/min]2025-08-11 16:12:48,293 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:12:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:12:48,973 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:12:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1369/7340 [46:30<202:52, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:12:50,174 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:12:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1369/7340 [46:31<202:57, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:51,233 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:12:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:52,561 - agent.ComputerAgent - INFO - Computer: type({'text': 'file1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'file1'})\n",
+ " 19%|███████---------------------------------| 1369/7340 [46:34<203:07, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:12:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:12:53,913 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:12:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:12:55,675 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+,'})\n",
+ " 19%|███████---------------------------------| 1371/7340 [46:37<202:59, 29.4 steps/min]\u001b[92m16:12:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:12:56,375 - agent.ComputerAgent - INFO - Computer: click({'x': 304, 'y': 735})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 304, 'y': 735})\n",
+ "2025-08-11 16:12:57,004 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:12:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1371/7340 [46:38<203:05, 29.4 steps/min]2025-08-11 16:12:57,682 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:12:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1372/7340 [46:39<202:58, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:00,048 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1372/7340 [46:41<203:07, 29.4 steps/min]2025-08-11 16:13:00,705 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:13:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:13:01,364 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:13:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1372/7340 [46:45<203:22, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1374/7340 [46:46<203:05, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:05,255 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:13:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:13:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa4f593f-4977-4dc4-9238-0a67602a0900/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:06,551 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 16:13:07,203 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 97})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 232, 'y': 97})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1376/7340 [46:49<202:59, 29.4 steps/min]2025-08-11 16:13:08,546 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:13:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1376/7340 [46:51<203:04, 29.4 steps/min]2025-08-11 16:13:09,882 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1376/7340 [46:52<203:08, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:13:13,459 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:14,815 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]2025-08-11 16:13:16,129 - agent.ComputerAgent - INFO - Computer: type({'text': 'find . -type f ! -perm 0644 -ls | head -n 20'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'find . -type f ! -perm 0644 -ls | head -n 20'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]\u001b[92m16:13:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ "2025-08-11 16:13:17,549 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ " 19%|███████---------------------------------| 1377/7340 [46:59<203:28, 29.3 steps/min]\u001b[92m16:13:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:19,673 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:13:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 19%|███████---------------------------------| 1379/7340 [47:02<203:18, 29.3 steps/min]\u001b[92m16:13:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:13:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:13:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:20,955 - agent.ComputerAgent - INFO - Computer: click({'x': 579, 'y': 356})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:21,592 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 346})\n",
+ "\u001b[92m16:13:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:13:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:13:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:22,639 - agent.ComputerAgent - INFO - Computer: double_click({'x': 503, 'y': 284})\n",
+ " 19%|███████---------------------------------| 1379/7340 [47:04<203:28, 29.3 steps/min]2025-08-11 16:13:23,308 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 258})\n",
+ "2025-08-11 16:13:23,955 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 386})\n",
+ "\u001b[92m16:13:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:24,652 - agent.ComputerAgent - INFO - Computer: click({'x': 323, 'y': 94})\n",
+ " 19%|███████---------------------------------| 1382/7340 [47:06<203:05, 29.3 steps/min]2025-08-11 16:13:25,285 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:13:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:25,964 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m16:13:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1385/7340 [47:08<202:42, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:29,362 - agent.ComputerAgent - INFO - Computer: click({'x': 705, 'y': 348})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 19%|███████---------------------------------| 1386/7340 [47:11<202:41, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1387/7340 [47:12<202:35, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:31,057 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "\u001b[92m16:13:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:31,735 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:13:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1387/7340 [47:14<202:44, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ef139911-784a-4229-9f23-51d74cde7d59/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:33,074 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:33,733 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:13:34,420 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 213})\n",
+ "2025-08-11 16:13:35,095 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:13:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:13:35,771 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:13:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1388/7340 [47:17<202:48, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:37,475 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:13:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:13:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1389/7340 [47:19<202:47, 29.3 steps/min]\u001b[92m16:13:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:38,822 - agent.ComputerAgent - INFO - Computer: click({'x': 89, 'y': 10})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:39,464 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:13:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:13:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1389/7340 [47:21<202:52, 29.3 steps/min]2025-08-11 16:13:40,444 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 290})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:41,139 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m16:13:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1390/7340 [47:22<202:49, 29.3 steps/min]2025-08-11 16:13:41,794 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:13:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:13:42,466 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:13:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1391/7340 [47:24<202:44, 29.3 steps/min]2025-08-11 16:13:43,156 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m16:13:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1391/7340 [47:25<202:49, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:13:44,485 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:13:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:13:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:45,832 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 738})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 19%|███████---------------------------------| 1392/7340 [47:27<202:47, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:13:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:13:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:47,532 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 75, 'y': 167}, {'x': 229, 'y': 307}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1393/7340 [47:29<202:44, 29.3 steps/min]2025-08-11 16:13:48,208 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:13:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:48,872 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:13:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:50,223 - agent.ComputerAgent - INFO - Computer: type({'text': 'bash -lc \\'cd ~/Desktop && convert \"tilearray.png\" -crop 3x1@ +repage +adjoin slice_%d.png && convert \\\\( slice_0.png -fill \"#ffb380\" -colorize 8 \\\\) \\\\( slice_1.png -fill \"#ff9a4d\" -colorize 18 \\\\) \\\\( slice_2.png -fill \"#ff7f2a\" -colorize 28 \\\\) +append rearranged.png && ls -1 slice_*.png rearranged.png\\''})\n",
+ " 19%|███████---------------------------------| 1394/7340 [47:31<202:44, 29.3 steps/min]2025-08-11 16:13:50,903 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:13:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1395/7340 [47:32<202:38, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:13:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:52,758 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:13:52,760 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 649})\n",
+ " 19%|███████---------------------------------| 1395/7340 [47:34<202:44, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:53,965 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:13:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1396/7340 [47:36<202:43, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:56,322 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:13:57,601 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:13:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:13:59,528 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:13:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/reset \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1400/7340 [47:41<202:20, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:00,175 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:14:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:14:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:14:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:02,860 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:14:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:14:03,502 - agent.ComputerAgent - INFO - Computer: click({'x': 250, 'y': 339})\n",
+ "\u001b[92m16:14:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:04,120 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:14:04,122 - agent.ComputerAgent - INFO - Computer: double_click({'x': 987, 'y': 398})\n",
+ "\u001b[92m16:14:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:04,793 - agent.ComputerAgent - INFO - Computer: click({'x': 990, 'y': 732})\n",
+ "\u001b[92m16:14:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1401/7340 [47:46<202:31, 29.3 steps/min]2025-08-11 16:14:05,457 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 378})\n",
+ "\u001b[92m16:14:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:06,722 - agent.ComputerAgent - INFO - Computer: click({'x': 334, 'y': 94})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0cad7a26-2224-4401-9a66-57daca76d380/close \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1403/7340 [47:48<202:18, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:14:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1405/7340 [47:49<202:02, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:08,754 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:14:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:14:09,433 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 69})\n",
+ "\u001b[92m16:14:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1405/7340 [47:51<202:11, 29.4 steps/min]\n",
+ "2025-08-11 16:14:10,802 - agent.ComputerAgent - INFO - Computer: double_click({'x': 118, 'y': 740})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1406/7340 [47:52<202:04, 29.4 steps/min]2025-08-11 16:14:11,956 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:14:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1407/7340 [47:53<201:58, 29.4 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 19%|███████---------------------------------| 1407/7340 [47:55<202:07, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]2025-08-11 16:14:15,392 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:14:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]2025-08-11 16:14:16,714 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1407/7340 [47:58<202:17, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]2025-08-11 16:14:18,052 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 16:14:19,442 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo \"Target tree after:\"; find ~/Desktop/dir3 -type d | sed \\'s|.*/Desktop/||\\'; echo \"Files in target (should be none from source copy):\"; find ~/Desktop/dir3 -type f -maxdepth 3 | sed \\'s|.*/Desktop/||\\''})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1408/7340 [48:01<202:18, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:14:20,786 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:22,093 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 19%|███████---------------------------------| 1409/7340 [48:03<202:19, 29.3 steps/min]2025-08-11 16:14:22,786 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 205})\n",
+ "\u001b[92m16:14:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:23,432 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:14:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:14:24,088 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:14:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:14:24,798 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 176})\n",
+ "2025-08-11 16:14:25,412 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:14:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1409/7340 [48:07<202:33, 29.3 steps/min]2025-08-11 16:14:26,595 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:14:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 19%|███████---------------------------------| 1411/7340 [48:08<202:17, 29.3 steps/min]2025-08-11 16:14:27,272 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:14:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:14:27,920 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:14:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:14:29,692 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ " 19%|███████---------------------------------| 1411/7340 [48:11<202:29, 29.3 steps/min]2025-08-11 16:14:30,356 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:14:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1412/7340 [48:12<202:23, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:14:31,535 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:14:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1412/7340 [48:13<202:28, 29.3 steps/min]2025-08-11 16:14:32,608 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:14:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:14:33,641 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:14:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1412/7340 [48:16<202:38, 29.3 steps/min]\u001b[92m16:14:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:36,357 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo find . -type f ! -perm 0644 -exec chmod 644 {} +'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1412/7340 [48:18<202:47, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:37,027 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 158})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:14:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:38,349 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:14:38,350 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 190})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 19%|███████---------------------------------| 1413/7340 [48:20<202:47, 29.2 steps/min]\u001b[92m16:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:39,638 - agent.ComputerAgent - INFO - Computer: click({'x': 339, 'y': 142})\n",
+ "\u001b[92m16:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:40,336 - agent.ComputerAgent - INFO - Computer: click({'x': 426, 'y': 249})\n",
+ " 19%|███████---------------------------------| 1415/7340 [48:22<202:31, 29.3 steps/min]2025-08-11 16:14:40,967 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:14:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:14:42,286 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1417/7340 [48:24<202:21, 29.3 steps/min]\u001b[92m16:14:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:14:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:45,324 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:14:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:14:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1417/7340 [48:27<202:31, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:46,008 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 284})\n",
+ "\u001b[92m16:14:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:14:47,056 - agent.ComputerAgent - INFO - Computer: click({'x': 334, 'y': 94})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:14:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1417/7340 [48:29<202:41, 29.2 steps/min]2025-08-11 16:14:48,427 - agent.ComputerAgent - INFO - Computer: click({'x': 602, 'y': 311})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:14:49,078 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:14:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:14:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1419/7340 [48:30<202:25, 29.2 steps/min]2025-08-11 16:14:50,552 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "2025-08-11 16:14:51,230 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:14:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1420/7340 [48:32<202:24, 29.2 steps/min]2025-08-11 16:14:51,884 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:14:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cc2e38be-6768-4928-bfe5-d7f31cb68b24/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1421/7340 [48:34<202:19, 29.3 steps/min]2025-08-11 16:14:53,727 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:14:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:14:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/39724bde-60dd-471d-ba25-1ac9b1405c76/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1422/7340 [48:36<202:16, 29.3 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:14:55,846 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:14:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1422/7340 [48:37<202:22, 29.2 steps/min]2025-08-11 16:14:56,520 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:14:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1422/7340 [48:38<202:27, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.93s/it]2025-08-11 16:14:57,786 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:14:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:14:58,662 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.71s/it]\n",
+ "\u001b[92m16:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:15:00,260 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.66s/it]\n",
+ " 19%|███████---------------------------------| 1422/7340 [48:41<202:40, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]\n",
+ "\u001b[92m16:15:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:01,756 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:15:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 19%|███████---------------------------------| 1422/7340 [48:44<202:50, 29.2 steps/min]\u001b[92m16:15:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:03,256 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:15:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.89s/it]2025-08-11 16:15:04,741 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.71s/it]\u001b[92m16:15:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]\n",
+ "2025-08-11 16:15:08,031 - agent.ComputerAgent - INFO - Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -l slice_*.png rearranged.png'\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"bash -lc 'cd ~/Desktop && ls -l slice_*.png rearranged.png'\"})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 19%|███████---------------------------------| 1422/7340 [48:50<203:16, 29.1 steps/min]\u001b[92m16:15:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:09,538 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:10,844 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 553, 'y': 275})\n",
+ "\u001b[92m16:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1423/7340 [48:52<203:13, 29.1 steps/min]\u001b[92m16:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:11,517 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 244, 'y': 176})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:12,850 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 676, 'scroll_x': 0, 'x': 499, 'y': 392})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 676, 'scroll_x': 0, 'x': 499, 'y': 392})\n",
+ "\u001b[92m16:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 19%|███████---------------------------------| 1424/7340 [48:54<203:11, 29.1 steps/min]\u001b[92m16:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:13,511 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:15:13,513 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 367})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 367})\n",
+ "2025-08-11 16:15:14,191 - agent.ComputerAgent - INFO - Computer: click({'x': 928, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 928, 'y': 230})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:15,545 - agent.ComputerAgent - INFO - Computer: click({'x': 118, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 118, 'y': 737})\n",
+ "\u001b[92m16:15:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:16,921 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 19%|███████---------------------------------| 1426/7340 [48:58<203:07, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:17,611 - agent.ComputerAgent - INFO - Computer: click({'x': 195, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 195, 'y': 339})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:18,959 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:15:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:15:19,677 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 258, 'y': 257}, {'x': 605, 'y': 259}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 258, 'y': 257}, {'x': 605, 'y': 259}]})\n",
+ " 19%|███████---------------------------------| 1430/7340 [49:01<202:36, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:20,376 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1432/7340 [49:02<202:21, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:22,174 - agent.ComputerAgent - INFO - Computer: click({'x': 654, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 654, 'y': 35})\n",
+ "\u001b[92m16:15:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1433/7340 [49:03<202:15, 29.2 steps/min]2025-08-11 16:15:22,852 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 397, 'y': 278})\n",
+ " 20%|███████---------------------------------| 1435/7340 [49:05<202:02, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1435/7340 [49:07<202:08, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:15:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:26,729 - agent.ComputerAgent - INFO - Computer: click({'x': 538, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 538, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1435/7340 [49:08<202:12, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1436/7340 [49:10<202:09, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:29,198 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:15:30,218 - agent.ComputerAgent - INFO - Computer: double_click({'x': 288, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 288, 'y': 101})\n",
+ "2025-08-11 16:15:30,879 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:15:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:15:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:31,527 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:15:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1436/7340 [49:13<202:22, 29.2 steps/min]2025-08-11 16:15:32,173 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:15:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:15:33,230 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 368, 'y': 564})\n",
+ "2025-08-11 16:15:33,899 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:15:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1437/7340 [49:15<202:21, 29.2 steps/min]2025-08-11 16:15:34,616 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:15:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:15:35,280 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:15:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:15:35,955 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:15:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:15:36,619 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:15:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1438/7340 [49:18<202:22, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1438/7340 [49:19<202:27, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:39,362 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:15:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:15:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:15:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1438/7340 [49:21<202:36, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:40,719 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 324, 'y': 94})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 324, 'y': 94})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:41,383 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:15:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:15:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1438/7340 [49:23<202:42, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:42,080 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 648, 'scroll_x': 0, 'x': 525, 'y': 709})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 648, 'scroll_x': 0, 'x': 525, 'y': 709})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:15:43,107 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:15:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1439/7340 [49:25<202:38, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:47,000 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:15:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1440/7340 [49:29<202:46, 29.1 steps/min]\u001b[92m16:15:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:15:48,341 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:15:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:15:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:48,992 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 10})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:15:50,399 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+home'})\n",
+ " 20%|███████---------------------------------| 1440/7340 [49:32<202:57, 29.1 steps/min]2025-08-11 16:15:51,140 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 178})\n",
+ "2025-08-11 16:15:51,834 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:15:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1441/7340 [49:33<202:52, 29.1 steps/min]2025-08-11 16:15:52,519 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:15:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1442/7340 [49:34<202:47, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:15:54,435 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/79295f2f-2987-488c-b4b7-c968f71c7597/close \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1442/7340 [49:36<202:52, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:15:57,500 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:15:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1443/7340 [49:39<202:55, 29.1 steps/min]2025-08-11 16:15:58,139 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:15:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:15:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1443/7340 [49:41<203:04, 29.0 steps/min]2025-08-11 16:16:00,336 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:16:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1443/7340 [49:42<203:09, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:01,736 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:16:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1443/7340 [49:44<203:15, 29.0 steps/min]2025-08-11 16:16:03,068 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:16:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1443/7340 [49:45<203:19, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]2025-08-11 16:16:05,120 - agent.ComputerAgent - INFO - Computer: type({'text': 'find . -type f ! -perm 0644 -ls | head -n 20'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'find . -type f ! -perm 0644 -ls | head -n 20'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "2025-08-11 16:16:07,675 - agent.ComputerAgent - INFO - Agent: The RAW image has been converted and saved as yicun.jpg on your Desktop.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: The RAW image has been converted and saved as yicun.jpg on your Desktop.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:16:08,376 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 156\n",
+ " - prompt_tokens: 8051\n",
+ " - total_tokens: 8207\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 128\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0116\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 156\n",
+ " - prompt_tokens: 8051\n",
+ " - total_tokens: 8207\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 128\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0116\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 20%|███████---------------------------------| 1445/7340 [49:51<203:23, 29.0 steps/min]\u001b[92m16:16:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:16:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:11,032 - agent.ComputerAgent - INFO - Computer: click({'x': 847, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 847, 'y': 142})\n",
+ "\u001b[92m16:16:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:16:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1445/7340 [49:52<203:29, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:16:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:11,705 - agent.ComputerAgent - INFO - Computer: double_click({'x': 203, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 203, 'y': 105})\n",
+ "2025-08-11 16:16:12,375 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 390, 'y': 75})\n",
+ "2025-08-11 16:16:13,056 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 36})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 36})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:14,391 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+pageup'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+pageup'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:15,053 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 645, 'scroll_x': 0, 'x': 553, 'y': 720})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 645, 'scroll_x': 0, 'x': 553, 'y': 720})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1446/7340 [49:57<203:37, 28.9 steps/min]\u001b[92m16:16:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:16,325 - agent.ComputerAgent - INFO - Computer: click({'x': 755, 'y': 415})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 755, 'y': 415})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:17,688 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1450/7340 [50:00<203:09, 29.0 steps/min]\u001b[92m16:16:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:19,687 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 53})\n",
+ "\u001b[92m16:16:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:20,365 - agent.ComputerAgent - INFO - Computer: click({'x': 841, 'y': 515})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 841, 'y': 515})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:16:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:16:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1451/7340 [50:03<203:09, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:22,331 - agent.ComputerAgent - INFO - Computer: click({'x': 425, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 425, 'y': 249})\n",
+ "2025-08-11 16:16:23,004 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 713})\n",
+ "\u001b[92m16:16:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|███████---------------------------------| 1453/7340 [50:04<202:54, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:23,677 - agent.ComputerAgent - INFO - Computer: click({'x': 104, 'y': 179})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 104, 'y': 179})\n",
+ "\u001b[92m16:16:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:24,380 - agent.ComputerAgent - INFO - Computer: click({'x': 375, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 375, 'y': 142})\n",
+ " 20%|███████---------------------------------| 1455/7340 [50:06<202:38, 29.0 steps/min]2025-08-11 16:16:25,028 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:16:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1457/7340 [50:07<202:21, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:08<200:42, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2656d0e-a6f4-4ecb-a099-cfe8471c4998/close \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:09<200:47, 29.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:10<200:51, 29.2 steps/min]2025-08-11 16:16:29,278 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:16:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:16:29,970 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:16:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:11<200:57, 29.2 steps/min]2025-08-11 16:16:30,947 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:16:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:16:31,607 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:16:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:16:32,269 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:16:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:16:32,941 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:16:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:14<201:09, 29.2 steps/min]2025-08-11 16:16:33,620 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:16:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:16<201:17, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:16:36,506 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+down'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:18<201:26, 29.2 steps/min]\u001b[92m16:16:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:38,243 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:16:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|███████---------------------------------| 1467/7340 [50:19<201:30, 29.1 steps/min]2025-08-11 16:16:38,927 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:16:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:16:39,618 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:16:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:16:40,330 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.70s/it]\u001b[92m16:16:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]29.1 steps/min]2025-08-11 16:16:42,499 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:16:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:16:43,863 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]29.1 steps/min]\n",
+ " 20%|████████--------------------------------| 1468/7340 [50:27<201:50, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:16:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|████████--------------------------------| 1468/7340 [50:28<201:54, 29.1 steps/min]\u001b[92m16:16:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:47,448 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 737})\n",
+ "\u001b[92m16:16:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:48,048 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 335})\n",
+ " 20%|████████--------------------------------| 1468/7340 [50:29<201:59, 29.1 steps/min]\u001b[92m16:16:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:16:48,725 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 590, 'scroll_x': 0, 'x': 525, 'y': 709})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 590, 'scroll_x': 0, 'x': 525, 'y': 709})\n",
+ " 20%|████████--------------------------------| 1470/7340 [50:30<201:42, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|████████--------------------------------| 1471/7340 [50:31<201:36, 29.1 steps/min]2025-08-11 16:16:50,428 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:16:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|████████--------------------------------| 1471/7340 [50:32<201:40, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 20%|████████--------------------------------| 1471/7340 [50:33<201:44, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:16:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:16:54,141 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -661, 'x': 988, 'y': 569})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|████████--------------------------------| 1472/7340 [50:35<201:42, 29.1 steps/min]\n",
+ "2025-08-11 16:16:55,500 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 139})\n",
+ "2025-08-11 16:16:56,145 - agent.ComputerAgent - INFO - Computer: click({'x': 396, 'y': 562})\n",
+ "2025-08-11 16:16:56,770 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:16:58,080 - agent.ComputerAgent - INFO - Computer: click({'x': 755, 'y': 415})\n",
+ "2025-08-11 16:16:58,768 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "2025-08-11 16:16:59,403 - agent.ComputerAgent - INFO - Computer: click({'x': 221, 'y': 196})\n",
+ "2025-08-11 16:17:00,055 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|████████--------------------------------| 1475/7340 [50:42<201:37, 29.1 steps/min]\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:17:01,448 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:17:02,151 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 256})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:17:04,009 - agent.ComputerAgent - INFO - Computer: click({'x': 629, 'y': 103})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a23ddde7-5509-407d-af64-ea09807c1af1/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:17:05,290 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "2025-08-11 16:17:05,979 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:17:06,660 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "2025-08-11 16:17:08,328 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "2025-08-11 16:17:09,009 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ " 20%|████████--------------------------------| 1479/7340 [50:50<201:29, 29.1 steps/min]\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ "2025-08-11 16:17:10,342 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "2025-08-11 16:17:12,492 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:17:16,779 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "2025-08-11 16:17:18,599 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 158})\n",
+ "2025-08-11 16:17:19,298 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 92})\n",
+ "2025-08-11 16:17:20,636 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "2025-08-11 16:17:21,319 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 642, 'scroll_x': 0, 'x': 553, 'y': 720})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:17:22,666 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 275})\n",
+ "2025-08-11 16:17:23,964 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "2025-08-11 16:17:25,154 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 189})\n",
+ "2025-08-11 16:17:25,781 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:17:29,042 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:17:29,739 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "2025-08-11 16:17:30,400 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "2025-08-11 16:17:31,096 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "2025-08-11 16:17:31,750 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "2025-08-11 16:17:32,438 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "2025-08-11 16:17:33,738 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "2025-08-11 16:17:34,427 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "2025-08-11 16:17:35,806 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 630, 'y': 105})\n",
+ "2025-08-11 16:17:36,961 - agent.ComputerAgent - INFO - Computer: click({'x': 607, 'y': 385})\n",
+ "2025-08-11 16:17:38,251 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ " 20%|████████--------------------------------| 1492/7340 [51:20<201:13, 29.1 steps/min]\n",
+ "2025-08-11 16:17:39,652 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo chmod 644 ./cache/trackrs/files/last-crawl.txt && find . -type f ! -perm 0644 -ls | head -n 20'})\n",
+ "2025-08-11 16:17:40,291 - agent.ComputerAgent - INFO - Computer: click({'x': 683, 'y': 516})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:17:42,665 - agent.ComputerAgent - INFO - Computer: click({'x': 984, 'y': 731})\n",
+ "2025-08-11 16:17:43,281 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "2025-08-11 16:17:45,112 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "2025-08-11 16:17:45,827 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "2025-08-11 16:17:46,489 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ " 20%|████████--------------------------------| 1497/7340 [51:28<200:56, 29.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:17:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:17:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:17:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:17:49,213 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 259})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 259})\n",
+ "2025-08-11 16:17:49,840 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ " 20%|████████--------------------------------| 1498/7340 [51:31<200:56, 29.1 steps/min]\u001b[92m16:17:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:17:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:17:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:17:51,535 - agent.ComputerAgent - INFO - Computer: double_click({'x': 203, 'y': 114})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 203, 'y': 114})\n",
+ "\u001b[92m16:17:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|████████--------------------------------| 1499/7340 [51:33<200:53, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:17:52,220 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 162})\n",
+ "\u001b[92m16:17:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:17:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:17:53,567 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 516, 'scroll_x': 0, 'x': 54, 'y': 750})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 516, 'scroll_x': 0, 'x': 54, 'y': 750})\n",
+ " 20%|████████--------------------------------| 1500/7340 [51:35<200:51, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:17:54,201 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:17:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:17:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:17:55,275 - agent.ComputerAgent - INFO - Computer: click({'x': 938, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 938, 'y': 190})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|████████--------------------------------| 1502/7340 [51:37<200:37, 29.1 steps/min]2025-08-11 16:17:55,940 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:17:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:17:56,582 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:17:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|████████--------------------------------| 1503/7340 [51:39<200:37, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:17:58,795 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 20%|████████--------------------------------| 1503/7340 [51:40<200:41, 29.1 steps/min]2025-08-11 16:17:59,463 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:17:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:18:00,131 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:18:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|████████--------------------------------| 1503/7340 [51:41<200:46, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 20%|████████--------------------------------| 1504/7340 [51:43<200:44, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:18:02,811 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:18:03,461 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:18:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|████████--------------------------------| 1504/7340 [51:45<200:49, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:18:05,133 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:18:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|████████--------------------------------| 1504/7340 [51:46<200:55, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 20%|████████--------------------------------| 1504/7340 [51:47<200:59, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1505/7340 [51:49<200:54, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:18:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:08,631 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:18:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:09,328 - agent.ComputerAgent - INFO - Computer: click({'x': 213, 'y': 125})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 213, 'y': 125})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3f7029e-7bbd-43fb-bea4-c66cc9ae685d/close \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1507/7340 [51:51<200:44, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:18:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:11,309 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 397, 'y': 564})\n",
+ "\u001b[92m16:18:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1508/7340 [51:53<200:39, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:11,965 - agent.ComputerAgent - INFO - Computer: click({'x': 727, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 727, 'y': 164})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1509/7340 [51:54<200:34, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 21%|████████--------------------------------| 1511/7340 [51:55<200:18, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]2025-08-11 16:18:15,299 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:18:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 21%|████████--------------------------------| 1511/7340 [51:57<200:24, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:18:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:17,737 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:18:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1511/7340 [51:59<200:34, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 16:18:18,455 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:18:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1511/7340 [52:01<200:40, 29.0 steps/min]2025-08-11 16:18:20,019 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:18:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 21%|████████--------------------------------| 1511/7340 [52:02<200:43, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:21,163 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:18:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:18:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:21,827 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 74})\n",
+ " 21%|████████--------------------------------| 1512/7340 [52:03<200:39, 29.0 steps/min]\u001b[92m16:18:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:22,451 - agent.ComputerAgent - INFO - Computer: move({'x': 525, 'y': 692})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 525, 'y': 692})\n",
+ "\u001b[92m16:18:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:18:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:23,480 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 302})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 420, 'y': 302})\n",
+ "2025-08-11 16:18:24,150 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 287})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 287})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1513/7340 [52:07<200:43, 29.0 steps/min]\u001b[92m16:18:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:27,400 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo find . -type f ! -perm 0644 -print -quit; echo DONE'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sudo find . -type f ! -perm 0644 -print -quit; echo DONE'})\n",
+ "2025-08-11 16:18:28,054 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:18:28,055 - agent.ComputerAgent - INFO - Computer: click({'x': 729, 'y': 106})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 729, 'y': 106})\n",
+ "\u001b[92m16:18:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1516/7340 [52:09<200:23, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:18:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:29,452 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 426})\n",
+ "2025-08-11 16:18:30,134 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 386})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1518/7340 [52:11<200:11, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:18:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:31,312 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 190})\n",
+ " 21%|████████--------------------------------| 1521/7340 [52:15<199:54, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:16<199:08, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25b11573-c320-4222-b3e4-5c23cec1ab43/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:18:35,741 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:18:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:17<199:13, 29.2 steps/min]2025-08-11 16:18:36,395 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:18:37,073 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:18:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:18<199:19, 29.2 steps/min]2025-08-11 16:18:38,118 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:18:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:20<199:25, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:39,473 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:18:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:18:40,162 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:18:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:18:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:22<199:33, 29.1 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:18:41,482 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:18:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:18:42,132 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:23<199:38, 29.1 steps/min]\u001b[92m16:18:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.70s/it]29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:18:46,302 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.65s/it]INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+,'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]29.1 steps/min]\n",
+ "2025-08-11 16:18:47,686 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:18:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:30<200:02, 29.1 steps/min]\u001b[92m16:18:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:49,841 - agent.ComputerAgent - INFO - Computer: type({'text': 'CharlieCard Store appointment Transportation Access Pass'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'CharlieCard Store appointment Transportation Access Pass'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:18:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 21%|████████--------------------------------| 1526/7340 [52:32<200:09, 29.0 steps/min]\u001b[92m16:18:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:51,245 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 735})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 81, 'y': 735})\n",
+ "2025-08-11 16:18:51,903 - agent.ComputerAgent - INFO - Computer: double_click({'x': 375, 'y': 81})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 375, 'y': 81})\n",
+ "\u001b[92m16:18:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:18:52,531 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 649, 'scroll_x': 0, 'x': 502, 'y': 698})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 649, 'scroll_x': 0, 'x': 502, 'y': 698})\n",
+ " 21%|████████--------------------------------| 1527/7340 [52:34<200:07, 29.0 steps/min]\u001b[92m16:18:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:18:53,193 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 491, 'x': 518, 'y': 692})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 491, 'x': 518, 'y': 692})\n",
+ " 21%|████████--------------------------------| 1531/7340 [52:35<199:31, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1531/7340 [52:38<199:43, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1531/7340 [52:39<199:47, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:18:58,444 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:18:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89cdf329-a61d-4d69-9c6c-5d0ea35677b6/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1535/7340 [52:41<199:16, 29.1 steps/min]\u001b[92m16:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:19:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:19:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:19:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1535/7340 [52:42<199:20, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:19:01,760 - agent.ComputerAgent - INFO - Computer: click({'x': 732, 'y': 603})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 732, 'y': 603})\n",
+ "2025-08-11 16:19:02,441 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 74})\n",
+ "2025-08-11 16:19:03,089 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:19:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:19:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:19:04,435 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 21%|████████--------------------------------| 1535/7340 [52:46<199:33, 29.1 steps/min]2025-08-11 16:19:05,091 - agent.ComputerAgent - INFO - Computer: double_click({'x': 182, 'y': 170})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 182, 'y': 170})\n",
+ "2025-08-11 16:19:05,720 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:19:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1538/7340 [52:47<199:09, 29.1 steps/min]2025-08-11 16:19:06,373 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:19:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fafe8f9a-bc46-42ad-b3ca-7190a64ab552/close \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1539/7340 [52:48<199:04, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1539/7340 [52:49<199:08, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:19:10,576 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1539/7340 [52:52<199:17, 29.1 steps/min]2025-08-11 16:19:11,235 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:19:12,267 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1540/7340 [52:54<199:14, 29.1 steps/min]2025-08-11 16:19:12,923 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:19:13,596 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:19:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]29.1 steps/min]\n",
+ " 21%|████████--------------------------------| 1540/7340 [52:59<199:34, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1540/7340 [53:00<199:38, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:19:20,196 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 21%|████████--------------------------------| 1540/7340 [53:02<199:44, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:19:20,919 - agent.ComputerAgent - INFO - Computer: click({'x': 849, 'y': 352})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 849, 'y': 352})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:19:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:19:22,751 - agent.ComputerAgent - INFO - Computer: click({'x': 367, 'y': 562})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 367, 'y': 562})\n",
+ " 21%|████████--------------------------------| 1540/7340 [53:04<199:53, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:19:24,084 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:19:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1542/7340 [53:06<199:41, 29.0 steps/min]2025-08-11 16:19:25,348 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:19:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:19:26,025 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1542/7340 [53:08<199:48, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:19:27,696 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 526, 'y': 709})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 526, 'y': 709})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1543/7340 [53:09<199:42, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:19:28,838 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 74})\n",
+ " 21%|████████--------------------------------| 1544/7340 [53:10<199:37, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1545/7340 [53:11<199:30, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1545/7340 [53:12<199:34, 29.0 steps/min]\u001b[92m16:19:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:19:31,660 - agent.ComputerAgent - INFO - Computer: click({'x': 101, 'y': 370})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 101, 'y': 370})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/reset \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1545/7340 [53:13<199:38, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:19:33,329 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:19:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1546/7340 [53:15<199:34, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:19:33,964 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:19:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:19:34,622 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:19:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1546/7340 [53:16<199:39, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:19:35,268 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:19:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:19:35,915 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:19:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:19:36,595 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:19:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 21%|████████--------------------------------| 1546/7340 [53:18<199:46, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80299c20-3bcf-48b1-a471-299a1eda0a00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:19:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1546/7340 [53:19<199:50, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:19:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:19:38,944 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 131})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1546/7340 [53:21<199:57, 29.0 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "[... repetitive LiteLLM (gpt-5, HelloKKMe/GTA1-7B), httpx, and agent.ComputerAgent log lines truncated ...]\n",
+ " 21%|████████--------------------------------| 1556/7340 [53:44<199:47, 28.9 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:20:03,800 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:20:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:46<199:42, 29.0 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:47<199:46, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:48<199:50, 28.9 steps/min]2025-08-11 16:20:07,163 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:20:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:49<199:55, 28.9 steps/min]2025-08-11 16:20:08,475 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:20:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:50<199:59, 28.9 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:20:09,784 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:20:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:51<200:02, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:11,357 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]\u001b[92m16:20:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:53<200:10, 28.9 steps/min]\u001b[92m16:20:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:54<200:15, 28.9 steps/min]2025-08-11 16:20:14,317 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:20:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:56<200:19, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.67s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]\n",
+ "\u001b[92m16:20:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:57<200:23, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:58<200:28, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:20:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:18,054 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 286})\n",
+ " 21%|████████--------------------------------| 1557/7340 [53:59<200:33, 28.8 steps/min]\u001b[92m16:20:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:18,735 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 768})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 768})\n",
+ "\u001b[92m16:20:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:19,418 - agent.ComputerAgent - INFO - Computer: click({'x': 675, 'y': 509})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 675, 'y': 509})\n",
+ "2025-08-11 16:20:20,074 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1558/7340 [54:02<200:33, 28.8 steps/min]\u001b[92m16:20:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:21,438 - agent.ComputerAgent - INFO - Computer: click({'x': 354, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 354, 'y': 128})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1561/7340 [54:03<200:09, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:22,722 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 633, 'scroll_x': 0, 'x': 87, 'y': 750})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 633, 'scroll_x': 0, 'x': 87, 'y': 750})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:24,015 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1562/7340 [54:06<200:08, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:20:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:25,893 - agent.ComputerAgent - INFO - Computer: click({'x': 158, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 158, 'y': 737})\n",
+ "\u001b[92m16:20:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1564/7340 [54:07<199:53, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:26,505 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 74})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1565/7340 [54:08<199:49, 28.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:27,833 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:28,504 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:20:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1566/7340 [54:10<199:46, 28.9 steps/min]\u001b[92m16:20:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:30,206 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 420, 'y': 294}, {'x': 171, 'y': 137}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 420, 'y': 294}, {'x': 171, 'y': 137}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1566/7340 [54:11<199:50, 28.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:30,846 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:20:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:20:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:20:31,508 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:20:31,509 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 219})\n",
+ " 21%|████████--------------------------------| 1567/7340 [54:13<199:45, 28.9 steps/min]2025-08-11 16:20:32,145 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:20:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:20:32,814 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:20:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:20:34,144 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 21%|████████--------------------------------| 1568/7340 [54:15<199:45, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:20:35,144 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:20:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:20:35,819 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:20:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1569/7340 [54:17<199:42, 28.9 steps/min]2025-08-11 16:20:36,485 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:20:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:20:37,145 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:20:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 21%|████████--------------------------------| 1569/7340 [54:18<199:46, 28.9 steps/min]\n",
+ " 21%|████████--------------------------------| 1569/7340 [54:20<199:54, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:20:40,320 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m16:20:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ca85c226-0c49-4084-b2bc-86bd540c8bce/close \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:22<199:29, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:23<199:35, 28.9 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:20:42,705 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:20:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:20:43,373 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:25<199:40, 28.9 steps/min]\u001b[92m16:20:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.89s/it]\u001b[92m16:20:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ffbf23fa-9bd6-4b26-befa-cb45d31fc4fa/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:27<199:49, 28.9 steps/min]\u001b[92m16:20:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:29<199:54, 28.9 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.65s/it]\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:32<200:06, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:20:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:33<200:10, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:34<200:16, 28.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.84s/it]\n",
+ "\u001b[92m16:20:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:54,551 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ " 21%|████████--------------------------------| 1572/7340 [54:36<200:21, 28.8 steps/min]\u001b[92m16:20:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:55,264 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 34})\n",
+ " 21%|████████--------------------------------| 1573/7340 [54:37<200:15, 28.8 steps/min]\u001b[92m16:20:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:55,893 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 402})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:20:57,225 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:20:57,227 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "\u001b[92m16:20:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:20:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1575/7340 [54:39<200:02, 28.8 steps/min]2025-08-11 16:20:57,861 - agent.ComputerAgent - INFO - Computer: click({'x': 555, 'y': 528})\n",
+ "2025-08-11 16:20:58,559 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 659})\n",
+ "2025-08-11 16:20:59,205 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:20:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:21:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 21%|████████--------------------------------| 1576/7340 [54:42<200:04, 28.8 steps/min]2025-08-11 16:21:01,199 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:21:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1579/7340 [54:43<199:40, 28.9 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:02,441 - agent.ComputerAgent - INFO - Computer: click({'x': 675, 'y': 509})\n",
+ "\u001b[92m16:21:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:03,133 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:21:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:21:03,809 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 75})\n",
+ "\u001b[92m16:21:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9dca7e41-642b-4cca-8758-834cef0e844c/reset \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1580/7340 [54:45<199:37, 28.9 steps/min]2025-08-11 16:21:04,440 - agent.ComputerAgent - INFO - Computer: click({'x': 51, 'y': 52})\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:49<199:23, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:09,144 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:21:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:50<199:28, 28.9 steps/min]2025-08-11 16:21:09,809 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:21:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:10,496 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:21:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:52<199:33, 28.8 steps/min]2025-08-11 16:21:11,167 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:21:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:21:11,846 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:21:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:53<199:38, 28.8 steps/min]2025-08-11 16:21:12,516 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:21:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:21:13,155 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:21:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:54<199:42, 28.8 steps/min]2025-08-11 16:21:13,812 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:21:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:21:14,451 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:21:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:56<199:47, 28.8 steps/min]2025-08-11 16:21:15,890 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m16:21:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:57<199:53, 28.8 steps/min]2025-08-11 16:21:17,024 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:21:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1583/7340 [54:59<200:00, 28.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:21:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:19,261 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:21:19,263 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 653})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1583/7340 [55:01<200:07, 28.8 steps/min]\u001b[92m16:21:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:21,899 - agent.ComputerAgent - INFO - Computer: type({'text': \"chmod 644 ./cache/trackers3/files/last-crawl.txt && sudo find . -type f ! -perm 0644 -printf '%m %u:%g %p\\\\n' | head -n 20\"})\n",
+ "\u001b[92m16:21:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:23,167 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 22%|████████--------------------------------| 1584/7340 [55:04<200:09, 28.8 steps/min]2025-08-11 16:21:23,804 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 53})\n",
+ "2025-08-11 16:21:24,462 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:21:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1585/7340 [55:06<200:04, 28.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:25,603 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:21:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1586/7340 [55:07<199:58, 28.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1586/7340 [55:08<200:02, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:21:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1586/7340 [55:10<200:10, 28.7 steps/min]\u001b[92m16:21:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:29,335 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 382})\n",
+ "\u001b[92m16:21:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:30,024 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 36})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:21:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1586/7340 [55:13<200:19, 28.7 steps/min]\n",
+ "2025-08-11 16:21:32,019 - agent.ComputerAgent - INFO - Computer: double_click({'x': 974, 'y': 666})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:32,665 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:21:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:21:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1588/7340 [55:14<200:05, 28.7 steps/min]2025-08-11 16:21:33,325 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 234})\n",
+ "\u001b[92m16:21:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:33,960 - agent.ComputerAgent - INFO - Computer: double_click({'x': 675, 'y': 509})\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:16<199:44, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:36,122 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:21:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:17<199:48, 28.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:18<199:52, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3b3e7fbd-8c02-45a6-bb3d-83c056398d3f/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:39,067 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:20<199:59, 28.7 steps/min]2025-08-11 16:21:39,712 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:21:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:21:40,351 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:21:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:22<200:06, 28.7 steps/min]\u001b[92m16:21:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:41,654 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:21:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "2025-08-11 16:21:42,368 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:21:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:21:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]\u001b[92m16:21:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:26<200:18, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 16:21:46,248 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:21:46,250 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]\u001b[92m16:21:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1591/7340 [55:29<200:30, 28.7 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:48,864 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:21:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1592/7340 [55:30<200:25, 28.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:49,492 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:21:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:21:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:50,148 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 429})\n",
+ " 22%|████████--------------------------------| 1592/7340 [55:31<200:29, 28.7 steps/min]\u001b[92m16:21:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:50,836 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "\u001b[92m16:21:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:51,527 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 74})\n",
+ " 22%|████████--------------------------------| 1593/7340 [55:33<200:25, 28.7 steps/min]\u001b[92m16:21:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:52,145 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 380})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:21:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:53,522 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 237})\n",
+ " 22%|████████--------------------------------| 1595/7340 [55:35<200:13, 28.7 steps/min]\u001b[92m16:21:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:21:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:21:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:54,168 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 324})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:21:54,889 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 40})\n",
+ "\u001b[92m16:21:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1597/7340 [55:36<199:58, 28.7 steps/min]\n",
+ "2025-08-11 16:21:55,529 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 197, 'y': 225}, {'x': 164, 'y': 131}]})\n",
+ "2025-08-11 16:21:56,213 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:21:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1599/7340 [55:37<199:44, 28.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 22%|████████--------------------------------| 1600/7340 [55:38<199:38, 28.8 steps/min]\u001b[92m16:21:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:21:58,111 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 666})\n",
+ " 22%|████████--------------------------------| 1601/7340 [55:40<199:36, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:21:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1601/7340 [55:41<199:39, 28.7 steps/min]2025-08-11 16:22:00,484 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:22:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:22:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:01,153 - agent.ComputerAgent - INFO - Computer: click({'x': 567, 'y': 105})\n",
+ "2025-08-11 16:22:01,803 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:22:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:02,441 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:22:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1601/7340 [55:44<199:47, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:03,121 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:22:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:03,792 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:22:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:05,124 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 22%|████████--------------------------------| 1602/7340 [55:46<199:47, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:06,450 - agent.ComputerAgent - INFO - Computer: keykeypress({'keys': 'ctrl+l'})\n",
+ "2025-08-11 16:22:06,451 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "\u001b[92m16:22:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: keykeypress\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:22:07,085 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:22:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1603/7340 [55:48<199:45, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:07,745 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:22:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:08,780 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:22:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1603/7340 [55:51<199:53, 28.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:10,132 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:22:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:22:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:10,794 - agent.ComputerAgent - INFO - Computer: click({'x': 234, 'y': 35})\n",
+ " 22%|████████--------------------------------| 1603/7340 [55:52<199:58, 28.7 steps/min]2025-08-11 16:22:11,849 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:22:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/reset \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1605/7340 [55:53<199:43, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1605/7340 [55:54<199:46, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:22:14,007 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m16:22:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1605/7340 [55:55<199:50, 28.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:15,593 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:22:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f5de3982-4969-41f7-9f6f-19c347517b74/reset \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1605/7340 [55:57<199:56, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:16,756 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:22:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1605/7340 [55:58<200:00, 28.7 steps/min]2025-08-11 16:22:17,396 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:22:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:18,753 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:22:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:19,391 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:22:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1606/7340 [56:01<200:00, 28.7 steps/min]\u001b[92m16:22:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:20,066 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 143})\n",
+ " 22%|████████--------------------------------| 1606/7340 [56:02<200:04, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:21,890 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n[14]\\tSteinberg, F. M., Bearden, M. M., & Keen, C. L. (2003). Cocoa and chocolate flavonoids: Implications for cardiovascular health. '})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\n[14]\\tSteinberg, F. M., Bearden, M. M., & Keen, C. L. (2003). Cocoa and chocolate flavonoids: Implications for cardiovascular health. '})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:22:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1607/7340 [56:05<200:04, 28.7 steps/min]2025-08-11 16:22:23,885 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:25,233 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:22:25,234 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:26,542 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 22%|████████--------------------------------| 1608/7340 [56:08<200:06, 28.6 steps/min]\u001b[92m16:22:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:22:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:27,529 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:22:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:28,191 - agent.ComputerAgent - INFO - Computer: click({'x': 463, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 463, 'y': 133})\n",
+ "2025-08-11 16:22:28,863 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:30,246 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:22:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:31,537 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:22:31,539 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1609/7340 [56:14<200:19, 28.6 steps/min]\u001b[92m16:22:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:22:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:34,870 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://upload.wikimedia.org/wikipedia/en/thumb/1/1e/The_University_of_Hong_Kong_crest.svg/1200px-The_University_of_Hong_Kong_crest.svg.png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://upload.wikimedia.org/wikipedia/en/thumb/1/1e/The_University_of_Hong_Kong_crest.svg/1200px-The_University_of_Hong_Kong_crest.svg.png'})\n",
+ "2025-08-11 16:22:35,551 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 363})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 363})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:22:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:36,894 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ " 22%|████████--------------------------------| 1612/7340 [56:18<200:05, 28.6 steps/min]2025-08-11 16:22:37,581 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 74})\n",
+ "2025-08-11 16:22:38,237 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:22:38,238 - agent.ComputerAgent - INFO - Agent: Proceeding to copy 1.png from the Desktop and paste it at the current cursor position in the document.\n",
+ "INFO:agent.ComputerAgent:Agent: Proceeding to copy 1.png from the Desktop and paste it at the current cursor position in the document.\n",
+ "2025-08-11 16:22:38,239 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 429})\n",
+ "\u001b[92m16:22:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:38,876 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 46, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:40,229 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "2025-08-11 16:22:40,871 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ " 22%|████████--------------------------------| 1614/7340 [56:22<200:00, 28.6 steps/min]\u001b[92m16:22:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:41,513 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:22:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1617/7340 [56:23<199:35, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:43,205 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:22:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1617/7340 [56:24<199:40, 28.7 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:22:43,876 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:22:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1617/7340 [56:26<199:45, 28.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:22:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:45,743 - agent.ComputerAgent - INFO - Computer: click({'x': 569, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 569, 'y': 446})\n",
+ " 22%|████████--------------------------------| 1617/7340 [56:27<199:49, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:46,899 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:22:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1618/7340 [56:28<199:44, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:22:48,180 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "2025-08-11 16:22:48,864 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:22:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:49,495 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:22:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1618/7340 [56:31<199:55, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:22:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:52,845 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "2025-08-11 16:22:53,518 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:22:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:54,185 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:22:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:22:54,842 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m16:22:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 22%|████████--------------------------------| 1619/7340 [56:36<200:02, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:22:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:22:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:55,516 - agent.ComputerAgent - INFO - Computer: click({'x': 899, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 899, 'y': 426})\n",
+ "2025-08-11 16:22:56,149 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 520, 'y': 306})\n",
+ "\u001b[92m16:22:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 22%|████████--------------------------------| 1620/7340 [56:37<199:57, 28.6 steps/min]2025-08-11 16:22:56,770 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "2025-08-11 16:22:57,425 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:22:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1622/7340 [56:39<199:45, 28.6 steps/min]\u001b[92m16:22:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:22:58,706 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:22:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:22:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:22:59,352 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1623/7340 [56:41<199:42, 28.6 steps/min]\u001b[92m16:23:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:23:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:23:01,146 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 211})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 275, 'y': 211})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/932fb6ee-8e77-41ca-8220-27e0c8783ced/close \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1625/7340 [56:42<199:27, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:23:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:23:03,155 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ " 22%|████████--------------------------------| 1626/7340 [56:44<199:25, 28.7 steps/min]\u001b[92m16:23:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:23:03,806 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:23:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it]2025-08-11 16:23:04,637 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:23:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1626/7340 [56:46<199:30, 28.6 steps/min]2025-08-11 16:23:05,304 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:23:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:23:06,184 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.70s/it]INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:23:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:23:06,878 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:23:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 16:23:08,432 - agent.ComputerAgent - INFO - Computer: type({'text': 'Journal of the American Dietetic Association, 103(2), 215-223. doi: 10.1053/jada.2003.50028'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Journal of the American Dietetic Association, 103(2), 215-223. doi: 10.1053/jada.2003.50028'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 22%|████████--------------------------------| 1627/7340 [56:50<199:34, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:23:09,068 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "[... repeated LiteLLM/httpx/progress log output truncated ...]\n",
+ "2025-08-11 16:24:04,040 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 16:24:04,708 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 587\n",
+ " - prompt_tokens: 12138\n",
+ " - total_tokens: 12725\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 576\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0210\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 587\n",
+ " - prompt_tokens: 12138\n",
+ " - total_tokens: 12725\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 576\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0210\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:06,029 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1656/7340 [57:47<198:22, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:07,283 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:07,928 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:24:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:08,560 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ " 23%|█████████-------------------------------| 1656/7340 [57:50<198:31, 28.6 steps/min]\u001b[92m16:24:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:09,579 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:24:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:10,215 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:24:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:10,869 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m16:24:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:11,535 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:24:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1656/7340 [57:53<198:41, 28.6 steps/min]2025-08-11 16:24:12,199 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:24:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1656/7340 [57:54<198:46, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:13,576 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:24:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:15,280 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:15,939 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:24:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:16,616 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:24:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1656/7340 [57:59<199:01, 28.6 steps/min]\u001b[92m16:24:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:17,943 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:24:17,943 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 136, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 136, 'y': 741})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1656/7340 [58:00<199:05, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:19,270 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:24:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:24:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:19,943 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "2025-08-11 16:24:20,940 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:24:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:24:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1658/7340 [58:02<198:55, 28.6 steps/min]2025-08-11 16:24:21,578 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 69})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 69})\n",
+ " 23%|█████████-------------------------------| 1659/7340 [58:03<198:49, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1660/7340 [58:05<198:44, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:23,918 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:24:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1660/7340 [58:06<198:49, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:24:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:25,291 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 176})\n",
+ "\u001b[92m16:24:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:25,953 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 48, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:28,379 - agent.ComputerAgent - INFO - Computer: type({'text': ''})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ''})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1664/7340 [58:10<198:24, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:24:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:24:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1684/7340 [58:12<195:28, 28.9 steps/min]2025-08-11 16:24:31,005 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 564})\n",
+ "\u001b[92m16:24:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:24:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:31,655 - agent.ComputerAgent - INFO - Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:32,328 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:24:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:32,999 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1684/7340 [58:14<195:37, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:34,996 - agent.ComputerAgent - INFO - Computer: type({'text': '3'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '3'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:35,673 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 987, 'y': 658})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 987, 'y': 658})\n",
+ "2025-08-11 16:24:36,338 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:24:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1686/7340 [58:18<195:30, 28.9 steps/min]\u001b[92m16:24:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:37,031 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 280, 'y': 375}, {'x': 802, 'y': 446}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 280, 'y': 375}, {'x': 802, 'y': 446}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:24:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1688/7340 [58:19<195:17, 28.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:38,396 - agent.ComputerAgent - INFO - Computer: click({'x': 60, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 60, 'y': 35})\n",
+ "\u001b[92m16:24:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e8a299f4-d946-4970-b9a4-2503717de8ce/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bc07116-76e3-42fb-a0e3-a2273a5caa64/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:39,048 - agent.ComputerAgent - INFO - Computer: click({'x': 478, 'y': 256})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 478, 'y': 256})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4e9c5c3-fa17-4f05-8383-03a3cb3c1fba/close \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1693/7340 [58:22<194:41, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1695/7340 [58:23<194:26, 29.0 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d8b3a739-de56-40fe-896f-831373c8ecee/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1695/7340 [58:25<194:33, 29.0 steps/min]\u001b[92m16:24:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:24:43,975 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m16:24:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:44,600 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:24:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 16:24:45,440 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:24:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1695/7340 [58:27<194:40, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]2025-08-11 16:24:46,986 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:24:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1695/7340 [58:28<194:45, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]2025-08-11 16:24:48,538 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:24:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]29.0 steps/min]\n",
+ "2025-08-11 16:24:49,198 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:24:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:49,856 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:24:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:24:50,863 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:24:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1695/7340 [58:32<194:58, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:51,758 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:24:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1695/7340 [58:33<195:01, 28.9 steps/min]\u001b[92m16:24:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:24:52,456 - agent.ComputerAgent - INFO - Computer: double_click({'x': 359, 'y': 78})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 359, 'y': 78})\n",
+ "\u001b[92m16:24:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:24:53,098 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 318})\n",
+ " 23%|█████████-------------------------------| 1697/7340 [58:37<194:57, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1697/7340 [58:39<195:01, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:24:58,685 - agent.ComputerAgent - INFO - Computer: keypressed({'keys': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypressed({'keys': 'left'})\n",
+ "2025-08-11 16:24:58,686 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:24:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: keypressed\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:24:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1697/7340 [58:41<195:11, 28.9 steps/min]\u001b[92m16:25:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:00,680 - agent.ComputerAgent - INFO - Computer: move({'x': 66, 'y': 344})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 66, 'y': 344})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:25:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:02,626 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'left'})\n",
+ "\u001b[92m16:25:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:25:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:25:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1698/7340 [58:46<195:17, 28.9 steps/min]2025-08-11 16:25:05,259 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:05,941 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:06,613 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 407})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 407})\n",
+ "2025-08-11 16:25:07,273 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 400, 'y': 564})\n",
+ "\u001b[92m16:25:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1700/7340 [58:49<195:08, 28.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:07,975 - agent.ComputerAgent - INFO - Computer: click({'x': 157, 'y': 81})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 157, 'y': 81})\n",
+ "\u001b[92m16:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1702/7340 [58:50<194:53, 28.9 steps/min]\u001b[92m16:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:08,660 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 612})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 612})\n",
+ "\u001b[92m16:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:09,305 - agent.ComputerAgent - INFO - Computer: click({'x': 907, 'y': 491})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 907, 'y': 491})\n",
+ "\u001b[92m16:25:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ef139911-784a-4229-9f23-51d74cde7d59/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:09,932 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 196, 'y': 130}, {'x': 729, 'y': 400}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 196, 'y': 130}, {'x': 729, 'y': 400}]})\n",
+ " 23%|█████████-------------------------------| 1704/7340 [58:51<194:40, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:25:11,273 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:25:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1706/7340 [58:53<194:27, 29.0 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3ad517be-7b27-424d-b632-3ba6ff1a1e71/close \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1706/7340 [58:54<194:31, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:13,592 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:25:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:14,242 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:25:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1706/7340 [58:55<194:37, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:15,277 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:25:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 23%|█████████-------------------------------| 1707/7340 [58:57<194:32, 29.0 steps/min]2025-08-11 16:25:15,980 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:25:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:17,365 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1707/7340 [58:59<194:38, 28.9 steps/min]2025-08-11 16:25:17,982 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:25:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:18,638 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m16:25:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:19,317 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:25:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1708/7340 [59:01<194:36, 28.9 steps/min]2025-08-11 16:25:19,959 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:25:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:20,973 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:25:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1708/7340 [59:02<194:41, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1708/7340 [59:03<194:45, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 23%|█████████-------------------------------| 1708/7340 [59:05<194:52, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]2025-08-11 16:25:26,374 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]\u001b[92m16:25:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]28.9 steps/min]\n",
+ "2025-08-11 16:25:27,913 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:25:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1710/7340 [59:10<194:48, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:29,080 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:25:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:25:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1710/7340 [59:11<194:52, 28.9 steps/min]\u001b[92m16:25:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:30,432 - agent.ComputerAgent - INFO - Computer: click({'x': 152, 'y': 165, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 152, 'y': 165, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:31,713 - agent.ComputerAgent - INFO - Computer: type({'text': 'todo_list_Jan_2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'todo_list_Jan_2'})\n",
+ "2025-08-11 16:25:32,352 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 664, 'y': 213})\n",
+ " 23%|█████████-------------------------------| 1710/7340 [59:14<195:01, 28.9 steps/min]\u001b[92m16:25:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:25:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:33,008 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "2025-08-11 16:25:33,643 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 356})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 244, 'y': 356})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:34,975 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+k'})\n",
+ " 23%|█████████-------------------------------| 1713/7340 [59:16<194:43, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:25:35,630 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:25:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:36,271 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:25:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:37,577 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 23%|█████████-------------------------------| 1716/7340 [59:19<194:25, 28.9 steps/min]2025-08-11 16:25:38,743 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m16:25:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:40,046 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1716/7340 [59:22<194:35, 28.9 steps/min]\u001b[92m16:25:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:41,319 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:25:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:25:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:41,962 - agent.ComputerAgent - INFO - Computer: click({'x': 540, 'y': 559})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 540, 'y': 559})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1717/7340 [59:23<194:30, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:42,584 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:25:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:43,251 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:25:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:43,911 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:25:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:44,543 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:25:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1718/7340 [59:26<194:30, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:45,615 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:25:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 23%|█████████-------------------------------| 1719/7340 [59:27<194:25, 28.9 steps/min]2025-08-11 16:25:46,792 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:25:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 23%|█████████-------------------------------| 1719/7340 [59:28<194:28, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:47,794 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:25:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:25:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1719/7340 [59:30<194:34, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:49,152 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:25:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:25:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:49,847 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 35})\n",
+ " 23%|█████████-------------------------------| 1719/7340 [59:31<194:38, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:25:51,001 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:25:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:25:52,385 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k ctrl+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+k ctrl+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1720/7340 [59:34<194:40, 28.9 steps/min]\u001b[92m16:25:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:25:53,711 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:25:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:25:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:55,023 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 112, 'y': 77})\n",
+ " 23%|█████████-------------------------------| 1721/7340 [59:36<194:37, 28.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:55,643 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:25:55,644 - agent.ComputerAgent - INFO - Computer: click({'x': 107, 'y': 33})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 107, 'y': 33})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1722/7340 [59:37<194:32, 28.9 steps/min]2025-08-11 16:25:56,685 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:25:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:25:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:57,990 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:25:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1723/7340 [59:39<194:30, 28.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:25:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:25:59,157 - agent.ComputerAgent - INFO - Computer: click({'x': 979, 'y': 577})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 979, 'y': 577})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1723/7340 [59:40<194:33, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ " 23%|█████████-------------------------------| 1724/7340 [59:41<194:28, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:01,475 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:26:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 23%|█████████-------------------------------| 1724/7340 [59:44<194:36, 28.9 steps/min]\u001b[92m16:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:26:03,496 - agent.ComputerAgent - INFO - Computer: click({'x': 747, 'y': 59})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 747, 'y': 59})\n",
+ "\u001b[92m16:26:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:04,153 - agent.ComputerAgent - INFO - Computer: click({'x': 178, 'y': 154})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 178, 'y': 154})\n",
+ "\u001b[92m16:26:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1725/7340 [59:45<194:32, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:04,785 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 35})\n",
+ " 24%|█████████-------------------------------| 1727/7340 [59:46<194:17, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:06,480 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:26:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:07,785 - agent.ComputerAgent - INFO - Computer: type({'text': 'Klingon'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Klingon'})\n",
+ " 24%|█████████-------------------------------| 1728/7340 [59:49<194:17, 28.9 steps/min]2025-08-11 16:26:08,464 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:26:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8956c64b-871b-43e2-84de-047c8ce2a839/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:09,149 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:26:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1729/7340 [59:50<194:13, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:10,339 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:26:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1729/7340 [59:52<194:17, 28.9 steps/min]2025-08-11 16:26:11,342 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m16:26:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:12,638 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:26:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:13,323 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:14,632 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ " 24%|█████████-------------------------------| 1729/7340 [59:56<194:30, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:26:15,280 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:26:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:26:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:15,941 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:26:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:16,575 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 397, 'y': 564})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1730/7340 [59:58<194:30, 28.8 steps/min]\u001b[92m16:26:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:26:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:18,413 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 318})\n",
+ " 24%|█████████-------------------------------| 1731/7340 [60:00<194:25, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:21,405 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:26:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:23,382 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:25,330 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 16:26:25,972 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 331\n",
+ " - prompt_tokens: 6007\n",
+ " - total_tokens: 6338\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0108\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 331\n",
+ " - prompt_tokens: 6007\n",
+ " - total_tokens: 6338\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0108\n",
+ " 24%|█████████-------------------------------| 1733/7340 [60:07<194:32, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:26:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:27,939 - agent.ComputerAgent - INFO - Agent: The directory has been renamed to \"todo_list_Jan_2\". Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: The directory has been renamed to \"todo_list_Jan_2\". Task completed\n",
+ "2025-08-11 16:26:28,587 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 19\n",
+ " - prompt_tokens: 8070\n",
+ " - total_tokens: 8089\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 0\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0074\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 19\n",
+ " - prompt_tokens: 8070\n",
+ " - total_tokens: 8089\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 0\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0074\n",
+ "2025-08-11 16:26:29,225 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:26:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:26:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:26:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:31,196 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:26:31,198 - agent.ComputerAgent - INFO - Agent: I can’t translate the conversation because the provided image is a static screenshot with no accessible audio. If there’s “hidden” audio (e.g., embedded via steganography), I need the original file that contains the data—not a screenshot—to extract and transcribe it.\n",
+ "\n",
+ "Please provide one of the following:\n",
+ "- The actual audio file (mp3/wav/etc.), or\n",
+ "- The original carrier file that embeds the audio (e.g., the exact PNG/JPG used for steganography), or\n",
+ "- A link to download the file, or\n",
+ "- The transcript of the conversation.\n",
+ "\n",
+ "Once I have any of the above, I’ll immediately transcribe and translate the conversation into French.\n",
+ "\n",
+ "Unable to complete due to missing required data.\n",
+ "INFO:agent.ComputerAgent:Agent: I can’t translate the conversation because the provided image is a static screenshot with no accessible audio. If there’s “hidden” audio (e.g., embedded via steganography), I need the original file that contains the data—not a screenshot—to extract and transcribe it.\n",
+ "\n",
+ "Please provide one of the following:\n",
+ "- The actual audio file (mp3/wav/etc.), or\n",
+ "- The original carrier file that embeds the audio (e.g., the exact PNG/JPG used for steganography), or\n",
+ "- A link to download the file, or\n",
+ "- The transcript of the conversation.\n",
+ "\n",
+ "Once I have any of the above, I’ll immediately transcribe and translate the conversation into French.\n",
+ "\n",
+ "Unable to complete due to missing required data.\n",
+ "2025-08-11 16:26:31,868 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 861\n",
+ " - prompt_tokens: 1932\n",
+ " - total_tokens: 2793\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0110\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 861\n",
+ " - prompt_tokens: 1932\n",
+ " - total_tokens: 2793\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0110\n",
+ " 24%|█████████-------------------------------| 1735/7340 [60:13<194:33, 28.8 steps/min]2025-08-11 16:26:32,534 - agent.ComputerAgent - INFO - Computer: click({'x': 794, 'y': 394})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 794, 'y': 394})\n",
+ "2025-08-11 16:26:33,190 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 130, 'y': 740})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 130, 'y': 740})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:26:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:26:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:34,522 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 142})\n",
+ "\u001b[92m16:26:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1736/7340 [60:16<194:33, 28.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:26:35,158 - agent.ComputerAgent - INFO - Computer: click({'x': 156, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 156, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:26:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1739/7340 [60:17<194:11, 28.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:26:36,478 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 605, 'y': 275})\n",
+ "\u001b[92m16:26:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:26:37,127 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 426})\n",
+ " 24%|█████████-------------------------------| 1740/7340 [60:18<194:06, 28.8 steps/min]2025-08-11 16:26:37,762 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:26:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1742/7340 [60:19<193:52, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:38,920 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:26:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:40,312 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "2025-08-11 16:26:40,961 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:26:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1742/7340 [60:22<194:01, 28.9 steps/min]2025-08-11 16:26:41,605 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:26:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:42,254 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:26:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:42,915 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:26:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:43,635 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:26:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1742/7340 [60:25<194:10, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:44,286 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:26:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:44,944 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:26:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:45,636 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:26:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:46,306 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:26:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 24%|█████████-------------------------------| 1768/7340 [60:28<190:34, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff65aa7f-5b38-4433-bea9-03a3667ea417/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/449bc839-4ba9-4d33-af59-182a2074d1ce/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1768/7340 [60:30<190:41, 29.2 steps/min]\u001b[92m16:26:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:26:49,169 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:26:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:26:50,474 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+k'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:26:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:26:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1768/7340 [60:33<190:51, 29.2 steps/min]2025-08-11 16:26:52,455 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:26:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:26:53,121 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 384})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 384})\n",
+ "2025-08-11 16:26:53,800 - agent.ComputerAgent - INFO - Computer: click({'x': 509, 'y': 144})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 509, 'y': 144})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:55,491 - agent.ComputerAgent - INFO - Computer: type({'text': 'Klingon'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Klingon'})\n",
+ " 24%|█████████-------------------------------| 1768/7340 [60:37<191:02, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:26:56,788 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 24%|█████████-------------------------------| 1772/7340 [60:38<190:33, 29.2 steps/min]2025-08-11 16:26:57,977 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:26:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/14b3ffc2-91e1-43c4-83b6-db17ba2bdb56/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1772/7340 [60:39<190:36, 29.2 steps/min]2025-08-11 16:26:58,628 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:26:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1772/7340 [60:41<190:40, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:00,603 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1772/7340 [60:42<190:44, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:02,580 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:27:02,580 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ "2025-08-11 16:27:03,213 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:27:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 24%|█████████-------------------------------| 1773/7340 [60:45<190:44, 29.2 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:27:04,067 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:27:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:27:04,725 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:27:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1773/7340 [60:46<190:49, 29.2 steps/min]2025-08-11 16:27:05,641 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]\u001b[92m16:27:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1773/7340 [60:47<190:52, 29.2 steps/min]2025-08-11 16:27:06,291 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:27:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:27:06,923 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:27:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1773/7340 [60:49<190:58, 29.2 steps/min]\u001b[92m16:27:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]2025-08-11 16:27:08,987 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1773/7340 [60:52<191:07, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:10,998 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:11,685 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 105})\n",
+ " 24%|█████████-------------------------------| 1773/7340 [60:53<191:11, 29.1 steps/min]\u001b[92m16:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:12,352 - agent.ComputerAgent - INFO - Computer: click({'x': 174, 'y': 601})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 174, 'y': 601})\n",
+ "\u001b[92m16:27:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:13,005 - agent.ComputerAgent - INFO - Computer: click({'x': 811, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 811, 'y': 75})\n",
+ " 24%|█████████-------------------------------| 1774/7340 [60:54<191:06, 29.1 steps/min]\u001b[92m16:27:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:13,653 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 244, 'y': 111})\n",
+ " 24%|█████████-------------------------------| 1776/7340 [60:55<190:52, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9dca7e41-642b-4cca-8758-834cef0e844c/close \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1780/7340 [60:57<190:23, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:17,553 - agent.ComputerAgent - INFO - Computer: click({'x': 488, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 488, 'y': 62})\n",
+ " 24%|█████████-------------------------------| 1780/7340 [60:59<190:30, 29.2 steps/min]\u001b[92m16:27:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:18,880 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:27:18,882 - agent.ComputerAgent - INFO - Computer: double_click({'x': 958, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 958, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]29.2 steps/min]2025-08-11 16:27:19,538 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:27:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1782/7340 [61:02<190:21, 29.2 steps/min]2025-08-11 16:27:21,019 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:27:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:27:21,656 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:27:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1782/7340 [61:03<190:26, 29.2 steps/min]2025-08-11 16:27:22,286 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:27:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 16:27:23,544 - agent.ComputerAgent - INFO - Computer: type({'text': ' tlh'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' tlh'})\n",
+ " 24%|█████████-------------------------------| 1783/7340 [61:06<190:26, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1783/7340 [61:07<190:30, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:26,595 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "\u001b[92m16:27:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:27,279 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 239})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 239})\n",
+ "2025-08-11 16:27:27,919 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 53})\n",
+ "\u001b[92m16:27:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:28,588 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:27:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1783/7340 [61:10<190:39, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:27:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:29,268 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 489, 'y': 142})\n",
+ "2025-08-11 16:27:29,956 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:27:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:27:30,638 - agent.ComputerAgent - INFO - Computer: double_click({'x': 243, 'y': 119})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 243, 'y': 119})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1786/7340 [61:12<190:20, 29.2 steps/min]2025-08-11 16:27:31,327 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:27:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1788/7340 [61:14<190:09, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 24%|█████████-------------------------------| 1788/7340 [61:15<190:12, 29.2 steps/min]\u001b[92m16:27:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:34,728 - agent.ComputerAgent - INFO - Computer: click({'x': 464, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 464, 'y': 75})\n",
+ " 24%|█████████-------------------------------| 1789/7340 [61:17<190:10, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:36,881 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:27:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1789/7340 [61:18<190:14, 29.2 steps/min]2025-08-11 16:27:37,560 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:27:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:38,848 - agent.ComputerAgent - INFO - Computer: type({'text': 'LARS Resources'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'LARS Resources'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:39,534 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:27:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:27:40,168 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:27:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1789/7340 [61:21<190:24, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:41,534 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1790/7340 [61:23<190:22, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:42,875 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 232, 'y': 232})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 24%|█████████-------------------------------| 1790/7340 [61:24<190:25, 29.1 steps/min]\u001b[92m16:27:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:43,898 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 429})\n",
+ "2025-08-11 16:27:44,535 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:27:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1791/7340 [61:26<190:21, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 24%|█████████-------------------------------| 1792/7340 [61:27<190:15, 29.2 steps/min]\u001b[92m16:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:46,357 - agent.ComputerAgent - INFO - Computer: click({'x': 179, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 179, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:46,991 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1792/7340 [61:28<190:20, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:48,333 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+v'})\n",
+ " 24%|█████████-------------------------------| 1793/7340 [61:30<190:15, 29.2 steps/min]2025-08-11 16:27:49,478 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:27:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:50,804 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://drive.google.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://drive.google.com'})\n",
+ " 24%|█████████-------------------------------| 1793/7340 [61:32<190:23, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:51,468 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:27:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:27:52,107 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1794/7340 [61:34<190:21, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:53,438 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:27:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:27:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:54,125 - agent.ComputerAgent - INFO - Computer: click({'x': 578, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 578, 'y': 429})\n",
+ " 24%|█████████-------------------------------| 1794/7340 [61:35<190:25, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1795/7340 [61:37<190:22, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:27:57,004 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 24%|█████████-------------------------------| 1795/7340 [61:38<190:25, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:27:57,720 - agent.ComputerAgent - INFO - Computer: click({'x': 503, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 503, 'y': 139})\n",
+ "2025-08-11 16:27:58,370 - agent.ComputerAgent - INFO - Computer: click({'x': 106, 'y': 269})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 106, 'y': 269})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1795/7340 [61:41<190:34, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:28:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:28:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 24%|█████████-------------------------------| 1797/7340 [61:42<190:21, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:28:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:01,555 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "2025-08-11 16:28:02,221 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 524})\n",
+ "\u001b[92m16:28:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 24%|█████████-------------------------------| 1797/7340 [61:43<190:25, 29.1 steps/min]2025-08-11 16:28:02,905 - agent.ComputerAgent - INFO - Computer: click({'x': 603, 'y': 570})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 603, 'y': 570})\n",
+ "2025-08-11 16:28:03,578 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:28:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1799/7340 [61:45<190:12, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1800/7340 [61:46<190:07, 29.1 steps/min]2025-08-11 16:28:05,256 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:28:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:06,585 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 25%|█████████-------------------------------| 1800/7340 [61:48<190:13, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:07,907 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:28:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1801/7340 [61:49<190:09, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:28:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:08,575 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:09,268 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:28:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1801/7340 [61:51<190:13, 29.1 steps/min]2025-08-11 16:28:09,949 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:28:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1802/7340 [61:52<190:09, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:11,277 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:28:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:28:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:11,927 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ " 25%|█████████-------------------------------| 1803/7340 [61:54<190:07, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:13,599 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:28:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1803/7340 [61:55<190:10, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:15,458 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1803/7340 [61:57<190:15, 29.1 steps/min]2025-08-11 16:28:16,439 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:28:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1803/7340 [61:58<190:20, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:17,754 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:18,784 - agent.ComputerAgent - INFO - Computer: click({'x': 456, 'y': 464})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 456, 'y': 464})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1803/7340 [62:01<190:27, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:28:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:28:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:20,800 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ "\u001b[92m16:28:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1804/7340 [62:02<190:23, 29.1 steps/min]2025-08-11 16:28:21,439 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 145, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 145, 'y': 732})\n",
+ " 25%|█████████-------------------------------| 1815/7340 [62:03<188:54, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8354c81c-0b56-437e-9adf-dd5fd16e92df/close \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:04<188:50, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:06<188:54, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:08<189:00, 29.2 steps/min]\u001b[92m16:28:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:27,138 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:28:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 16:28:28,089 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:28:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:28:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:10<189:07, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]2025-08-11 16:28:29,639 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:28:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]29.2 steps/min]\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:12<189:13, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:13<189:16, 29.2 steps/min]\u001b[92m16:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:32,865 - agent.ComputerAgent - INFO - Computer: click({'x': 683, 'y': 617})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 683, 'y': 617})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:28:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1816/7340 [62:15<189:22, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:34,194 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 10})\n",
+ "\u001b[92m16:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:34,868 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:35,525 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 389, 'y': 75})\n",
+ " 25%|█████████-------------------------------| 1817/7340 [62:17<189:19, 29.2 steps/min]2025-08-11 16:28:36,163 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 542, 'y': 286})\n",
+ "\u001b[92m16:28:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:36,809 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 429})\n",
+ " 25%|█████████-------------------------------| 1821/7340 [62:19<188:53, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1821/7340 [62:20<188:57, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:28:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:40,151 - agent.ComputerAgent - INFO - Computer: click({'x': 458, 'y': 464})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 458, 'y': 464})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1821/7340 [62:23<189:05, 29.2 steps/min]\u001b[92m16:28:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:28:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:43,092 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 141})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 141})\n",
+ "2025-08-11 16:28:43,749 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:28:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:28:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1822/7340 [62:25<189:03, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:28:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:44,470 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:28:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:28:45,130 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:28:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:28:45,800 - agent.ComputerAgent - INFO - Computer: click({'x': 688, 'y': 262})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 688, 'y': 262})\n",
+ "2025-08-11 16:28:46,446 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 731})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 212, 'y': 731})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:47,762 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:28:47,762 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 25%|█████████-------------------------------| 1823/7340 [62:29<189:07, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:28:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:28:49,110 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:28:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1825/7340 [62:30<188:54, 29.2 steps/min]\u001b[92m16:28:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:28:49,787 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "2025-08-11 16:28:50,448 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:28:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1825/7340 [62:32<188:58, 29.2 steps/min]2025-08-11 16:28:51,128 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:28:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1826/7340 [62:35<188:59, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:28:55,007 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1826/7340 [62:36<189:04, 29.2 steps/min]2025-08-11 16:28:55,639 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:28:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:28:56,279 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:28:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:28:56,929 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:28:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:28:57,608 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:28:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1827/7340 [62:39<189:03, 29.2 steps/min]2025-08-11 16:28:58,287 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:28:58,959 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1827/7340 [62:40<189:07, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|█████████-------------------------------| 1827/7340 [62:41<189:10, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 25%|█████████-------------------------------| 1827/7340 [62:42<189:14, 29.1 steps/min]\u001b[92m16:29:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:01,834 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 552})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 534, 'y': 552})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1827/7340 [62:44<189:18, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:03,626 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ " 25%|█████████-------------------------------| 1828/7340 [62:45<189:13, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 25%|█████████-------------------------------| 1829/7340 [62:46<189:08, 29.1 steps/min]\u001b[92m16:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:05,475 - agent.ComputerAgent - INFO - Computer: double_click({'x': 193, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 193, 'y': 111})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1829/7340 [62:47<189:13, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:06,779 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:07,468 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 416, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1830/7340 [62:49<189:10, 29.1 steps/min]\u001b[92m16:29:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:09,505 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -al\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ls -al\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1831/7340 [62:51<189:06, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:10,141 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:29:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:29:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:29:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1832/7340 [62:52<189:02, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:11,465 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:12,126 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:29:12,127 - agent.ComputerAgent - INFO - Computer: double_click({'button': 'left', 'x': 960, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'button': 'left', 'x': 960, 'y': 713})\n",
+ "\u001b[92m16:29:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1833/7340 [62:53<188:58, 29.1 steps/min]2025-08-11 16:29:12,802 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 194, 'y': 131}, {'x': 745, 'y': 420}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 194, 'y': 131}, {'x': 745, 'y': 420}]})\n",
+ "2025-08-11 16:29:13,426 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:29:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|█████████-------------------------------| 1833/7340 [62:55<189:04, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:15,287 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|█████████-------------------------------| 1834/7340 [62:57<189:01, 29.1 steps/min]\u001b[92m16:29:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:29:17,192 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m16:29:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1835/7340 [62:58<188:56, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:29:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:17,861 - agent.ComputerAgent - INFO - Computer: click({'x': 486, 'y': 463})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 486, 'y': 463})\n",
+ "\u001b[92m16:29:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:18,498 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 664, 'y': 213})\n",
+ " 25%|██████████------------------------------| 1835/7340 [63:00<189:00, 29.1 steps/min]2025-08-11 16:29:19,169 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:29:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1837/7340 [63:01<188:47, 29.1 steps/min]2025-08-11 16:29:19,839 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:29:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:29:20,523 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1838/7340 [63:02<188:42, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:22,071 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m16:29:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1838/7340 [63:03<188:46, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1838/7340 [63:04<188:49, 29.1 steps/min]2025-08-11 16:29:23,788 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:29:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1838/7340 [63:05<188:52, 29.1 steps/min]2025-08-11 16:29:24,444 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 25%|██████████------------------------------| 1839/7340 [63:06<188:47, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:26,755 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 25%|██████████------------------------------| 1839/7340 [63:08<188:52, 29.1 steps/min]2025-08-11 16:29:27,902 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1839/7340 [63:09<188:55, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:28,590 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m16:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1839/7340 [63:11<189:00, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:30,449 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 148, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 148, 'y': 741})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1839/7340 [63:12<189:05, 29.1 steps/min]\u001b[92m16:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:32,274 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 324})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 66, 'y': 324})\n",
+ " 25%|██████████------------------------------| 1841/7340 [63:14<188:52, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c062c21a-1b89-4117-86d3-d763f8af4cbd/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:33,640 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m16:29:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1842/7340 [63:16<188:50, 29.1 steps/min]\u001b[92m16:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d0000302-258b-4660-9baa-e149c2ad83fd/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:29:35,653 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1842/7340 [63:18<188:56, 29.1 steps/min]\u001b[92m16:29:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:37,169 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]\u001b[92m16:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/81398d20-3c85-489b-9abc-2af244ec1feb/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/reset \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1843/7340 [63:19<188:51, 29.1 steps/min]2025-08-11 16:29:37,829 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1843/7340 [63:20<188:54, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "2025-08-11 16:29:41,122 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1843/7340 [63:23<189:04, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:42,562 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:43,962 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:29:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:44,623 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:29:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:29:45,291 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:47,330 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 16:29:47,981 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 523\n",
+ " - prompt_tokens: 10845\n",
+ " - total_tokens: 11368\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0166\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 523\n",
+ " - prompt_tokens: 10845\n",
+ " - total_tokens: 11368\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0166\n",
+ " 25%|██████████------------------------------| 1844/7340 [63:29<189:14, 29.0 steps/min]\u001b[92m16:29:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:48,637 - agent.ComputerAgent - INFO - Computer: click({'x': 983, 'y': 102})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 983, 'y': 102})\n",
+ "2025-08-11 16:29:49,306 - agent.ComputerAgent - INFO - Computer: double_click({'x': 279, 'y': 108})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 279, 'y': 108})\n",
+ "\u001b[92m16:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:29:50,591 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:29:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:29:51,269 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "2025-08-11 16:29:51,916 - agent.ComputerAgent - INFO - Computer: click({'x': 523, 'y': 330})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 523, 'y': 330})\n",
+ "\u001b[92m16:29:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1845/7340 [63:33<189:18, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:29:52,579 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 48})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 48})\n",
+ "\u001b[92m16:29:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:53,701 - agent.ComputerAgent - INFO - Computer: click({'x': 188, 'y': 619})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 188, 'y': 619})\n",
+ " 25%|██████████------------------------------| 1849/7340 [63:35<188:50, 29.1 steps/min]2025-08-11 16:29:54,344 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:29:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ " 25%|██████████------------------------------| 1851/7340 [63:36<188:37, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:29:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1851/7340 [63:37<188:41, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:29:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:29:57,062 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 64})\n",
+ " 25%|██████████------------------------------| 1851/7340 [63:38<188:44, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:29:57,676 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m16:29:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1852/7340 [63:39<188:39, 29.1 steps/min]2025-08-11 16:29:59,322 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:29:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1852/7340 [63:41<188:44, 29.1 steps/min]\u001b[92m16:30:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:00,901 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:30:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:01,551 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:30:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:30:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1853/7340 [63:44<188:44, 29.1 steps/min]\u001b[92m16:30:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:03,257 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:30:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:03,866 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:30:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:04,553 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 308})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:30:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1853/7340 [63:47<188:52, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:06,542 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:30:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:30:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:07,830 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:30:07,832 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 351})\n",
+ "\u001b[92m16:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1863/7340 [63:49<187:38, 29.2 steps/min]\n",
+ "\u001b[92m16:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:08,525 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:30:08,526 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 649})\n",
+ "2025-08-11 16:30:09,184 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:30:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:09,881 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:30:09,882 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 427})\n",
+ "\u001b[92m16:30:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1864/7340 [63:51<187:36, 29.2 steps/min]2025-08-11 16:30:10,567 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "2025-08-11 16:30:11,204 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:30:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1866/7340 [63:52<187:24, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:12,354 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "\u001b[92m16:30:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6b06a1a-197c-499e-a884-cc6bce509fa3/close \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1867/7340 [63:54<187:19, 29.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1867/7340 [63:55<187:23, 29.2 steps/min]\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1867/7340 [63:56<187:26, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]\u001b[92m16:30:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 25%|██████████------------------------------| 1868/7340 [63:57<187:21, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:16,274 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:30:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:17,196 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]\n",
+ "\u001b[92m16:30:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1868/7340 [63:58<187:25, 29.2 steps/min]2025-08-11 16:30:17,894 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "\u001b[92m16:30:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]2025-08-11 16:30:18,906 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:30:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]29.2 steps/min]\n",
+ "2025-08-11 16:30:19,584 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m16:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:20,308 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:30:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 25%|██████████------------------------------| 1868/7340 [64:02<187:34, 29.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 25%|██████████------------------------------| 1869/7340 [64:03<187:29, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:22,521 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m16:30:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 25%|██████████------------------------------| 1869/7340 [64:04<187:34, 29.2 steps/min]\u001b[92m16:30:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 25%|██████████------------------------------| 1869/7340 [64:06<187:38, 29.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 25%|██████████------------------------------| 1869/7340 [64:08<187:44, 29.1 steps/min]\u001b[92m16:30:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:27,020 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:30:27,021 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:27,699 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 35})\n",
+ "2025-08-11 16:30:28,326 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 71})\n",
+ "2025-08-11 16:30:29,005 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -728, 'scroll_x': 0, 'x': 526, 'y': 572})\n",
+ " 25%|██████████------------------------------| 1870/7340 [64:10<187:43, 29.1 steps/min]\u001b[92m16:30:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:30:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:29,701 - agent.ComputerAgent - INFO - Computer: click({'x': 717, 'y': 620})\n",
+ "2025-08-11 16:30:30,379 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ " 26%|██████████------------------------------| 1876/7340 [64:13<187:02, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:32,063 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m16:30:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:33,967 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:35,332 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\nls -l\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:30:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1876/7340 [64:18<187:17, 29.2 steps/min]2025-08-11 16:30:37,277 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:30:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:38,620 - agent.ComputerAgent - INFO - Computer: type({'text': 'mkdir -p fails; find . -type f -name \"*failed.ipynb\" -exec cp --parents -t ./fails {} +'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:39,950 - agent.ComputerAgent - INFO - Computer: double_click({'x': 17, 'y': 335})\n",
+ "\u001b[92m16:30:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:30:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1878/7340 [64:22<187:13, 29.2 steps/min]\n",
+ "2025-08-11 16:30:41,287 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:30:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:41,941 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 213})\n",
+ "2025-08-11 16:30:42,616 - agent.ComputerAgent - INFO - Computer: click({'x': 552, 'y': 32})\n",
+ "\u001b[92m16:30:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:43,268 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:43,928 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 264})\n",
+ " 26%|██████████------------------------------| 1880/7340 [64:25<187:06, 29.2 steps/min]\u001b[92m16:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:44,573 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:30:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:45,273 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:30:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:45,907 - agent.ComputerAgent - INFO - Computer: click({'x': 367, 'y': 562})\n",
+ "2025-08-11 16:30:46,574 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:30:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1883/7340 [64:28<186:50, 29.2 steps/min]2025-08-11 16:30:47,769 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:30:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1884/7340 [64:29<186:45, 29.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:48,941 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m16:30:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1884/7340 [64:30<186:49, 29.2 steps/min]\n",
+ " 26%|██████████------------------------------| 1884/7340 [64:31<186:52, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 26%|██████████------------------------------| 1885/7340 [64:32<186:47, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:52,211 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:30:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1885/7340 [64:34<186:50, 29.2 steps/min]2025-08-11 16:30:52,886 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m16:30:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:30:53,542 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:30:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:30:54,852 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://extensions'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1885/7340 [64:36<186:58, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:30:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:56,157 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1887/7340 [64:37<186:46, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:30:56,844 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:30:57,553 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 169})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 169})\n",
+ "2025-08-11 16:30:58,185 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:30:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5da0c259-034b-4ba2-9e95-9d4ae99c7475/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1887/7340 [64:40<186:53, 29.2 steps/min]\u001b[92m16:30:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:30:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:00,575 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -671, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -671, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1889/7340 [64:43<186:45, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:31:01,878 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:31:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:31:02,570 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]29.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1890/7340 [64:45<186:43, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:04,382 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:31:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]\u001b[92m16:31:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]29.2 steps/min]\n",
+ " 26%|██████████------------------------------| 1890/7340 [64:48<186:53, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:08,315 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ " 26%|██████████------------------------------| 1891/7340 [64:50<186:49, 29.2 steps/min]\u001b[92m16:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:08,997 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 333})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 333})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:09,643 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1891/7340 [64:52<186:55, 29.2 steps/min]\u001b[92m16:31:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:10,946 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:12,203 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m16:31:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1892/7340 [64:54<186:54, 29.1 steps/min]\u001b[92m16:31:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:13,503 - agent.ComputerAgent - INFO - Computer: click({'x': 527, 'y': 387})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 527, 'y': 387})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:14,143 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:15,513 - agent.ComputerAgent - INFO - Computer: type({'text': 'exiftool heron.jpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'exiftool heron.jpeg\\n'})\n",
+ " 26%|██████████------------------------------| 1894/7340 [64:57<186:46, 29.2 steps/min]2025-08-11 16:31:16,160 - agent.ComputerAgent - INFO - Computer: click({'x': 387, 'y': 363})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 387, 'y': 363})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1896/7340 [64:58<186:34, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:31:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:31:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1897/7340 [64:59<186:29, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:18,679 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 314, 'y': 730}, {'x': 970, 'y': 730}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 314, 'y': 730}, {'x': 970, 'y': 730}]})\n",
+ "\u001b[92m16:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:20,054 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 368, 'y': 564})\n",
+ " 26%|██████████------------------------------| 1897/7340 [65:01<186:35, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:31:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:21,192 - agent.ComputerAgent - INFO - Computer: click({'x': 472, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 472, 'y': 62})\n",
+ " 26%|██████████------------------------------| 1900/7340 [65:02<186:14, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:22,523 - agent.ComputerAgent - INFO - Computer: type({'text': 'tlh'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'tlh'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1901/7340 [65:04<186:10, 29.2 steps/min]2025-08-11 16:31:23,153 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:31:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:23,837 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:31:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1902/7340 [65:06<186:09, 29.2 steps/min]2025-08-11 16:31:25,523 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:31:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:31:26,175 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1903/7340 [65:08<186:05, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:26,853 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:27,529 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 425})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 473, 'y': 425})\n",
+ " 26%|██████████------------------------------| 1903/7340 [65:09<186:08, 29.2 steps/min]2025-08-11 16:31:28,192 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1904/7340 [65:10<186:05, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:29,530 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:31:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:30,507 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -592, 'scroll_x': 0, 'x': 989, 'y': 534})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -592, 'scroll_x': 0, 'x': 989, 'y': 534})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1904/7340 [65:12<186:09, 29.2 steps/min]2025-08-11 16:31:31,580 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:31:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1905/7340 [65:13<186:04, 29.2 steps/min]2025-08-11 16:31:32,585 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:31:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:31:33,223 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:31:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:31:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1905/7340 [65:16<186:13, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:35,243 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:31:35,903 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1906/7340 [65:17<186:09, 29.2 steps/min]2025-08-11 16:31:36,577 - agent.ComputerAgent - INFO - Computer: click({'x': 537, 'y': 568})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 537, 'y': 568})\n",
+ "2025-08-11 16:31:37,234 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 318})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1906/7340 [65:19<186:14, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:38,531 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:31:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:39,238 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 529})\n",
+ " 26%|██████████------------------------------| 1909/7340 [65:21<185:57, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:40,934 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:31:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1909/7340 [65:22<186:00, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1909/7340 [65:23<186:03, 29.2 steps/min]2025-08-11 16:31:43,154 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:31:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:44,904 - agent.ComputerAgent - INFO - Computer: type({'text': \"find ./fails -type f -print | sed 's|^./||'\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"find ./fails -type f -print | sed 's|^./||'\"})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:46,914 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:31:48,288 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://extensions'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'chrome://extensions'})\n",
+ " 26%|██████████------------------------------| 1910/7340 [65:30<186:12, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:49,680 - agent.ComputerAgent - INFO - Computer: type({'text': 'strings -n 8 heron.jpeg | head -n 20\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'strings -n 8 heron.jpeg | head -n 20\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:51,043 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1913/7340 [65:32<185:57, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:52,129 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 324})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 66, 'y': 324})\n",
+ "2025-08-11 16:31:52,820 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:31:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:31:53,485 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1914/7340 [65:35<185:55, 29.2 steps/min]\u001b[92m16:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:31:54,556 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:31:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/reset \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1915/7340 [65:36<185:51, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1915/7340 [65:38<185:56, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:31:57,060 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:31:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:31:57,777 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 499, 'y': 731}, {'x': 354, 'y': 731}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 499, 'y': 731}, {'x': 354, 'y': 731}]})\n",
+ "\u001b[92m16:31:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1915/7340 [65:39<186:00, 29.2 steps/min]2025-08-11 16:31:58,450 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 594})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 594})\n",
+ " 26%|██████████------------------------------| 1916/7340 [65:40<185:55, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:00,109 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1918/7340 [65:42<185:45, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:01,445 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:32:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:32:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:02,102 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:32:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:02,763 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -658, 'scroll_x': 0, 'x': 526, 'y': 457})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -658, 'scroll_x': 0, 'x': 526, 'y': 457})\n",
+ " 26%|██████████------------------------------| 1918/7340 [65:44<185:50, 29.2 steps/min]2025-08-11 16:32:03,402 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:04,095 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:32:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1919/7340 [65:46<185:49, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:05,825 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:06,519 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:32:06,520 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:07,891 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1919/7340 [65:49<185:57, 29.2 steps/min]2025-08-11 16:32:08,566 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:32:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:09,245 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:32:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:32:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:12,042 - agent.ComputerAgent - INFO - Computer: type({'text': 'LARS Resources (Backup)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'LARS Resources (Backup)'})\n",
+ " 26%|██████████------------------------------| 1921/7340 [65:53<185:53, 29.2 steps/min]\u001b[92m16:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:12,666 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 570})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 605, 'y': 570})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:13,966 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:32:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1923/7340 [65:55<185:43, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:17,311 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 26%|██████████------------------------------| 1924/7340 [65:59<185:44, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:17,996 - agent.ComputerAgent - INFO - Computer: click({'x': 524, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 524, 'y': 334})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:19,977 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:32:19,977 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 403})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 403})\n",
+ "2025-08-11 16:32:20,632 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:32:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:21,994 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ " 26%|██████████------------------------------| 1924/7340 [66:03<185:57, 29.1 steps/min]\u001b[92m16:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:22,646 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:32:22,647 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 52})\n",
+ "\u001b[92m16:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:23,365 - agent.ComputerAgent - INFO - Computer: click({'x': 707, 'y': 753})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 707, 'y': 753})\n",
+ "\u001b[92m16:32:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1927/7340 [66:05<185:38, 29.2 steps/min]2025-08-11 16:32:24,005 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:24,684 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 318})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1929/7340 [66:06<185:26, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:25,727 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 405, 'y': 731}, {'x': 227, 'y': 731}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 405, 'y': 731}, {'x': 227, 'y': 731}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:27,066 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 26%|██████████------------------------------| 1930/7340 [66:08<185:25, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:28,204 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:32:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1932/7340 [66:09<185:12, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1937/7340 [66:10<184:36, 29.3 steps/min]2025-08-11 16:32:29,870 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:32:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f5de3982-4969-41f7-9f6f-19c347517b74/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1937/7340 [66:12<184:40, 29.3 steps/min]2025-08-11 16:32:31,201 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:32:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:32,253 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:32:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:33,548 - agent.ComputerAgent - INFO - Computer: type({'text': 'binwalk heron.jpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'binwalk heron.jpeg\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1937/7340 [66:15<184:48, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:34,225 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:34,865 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:35,536 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:32:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:36,206 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:36,843 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:37,504 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:32:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1938/7340 [66:19<184:53, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:39,227 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:32:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:32:40,549 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1938/7340 [66:22<185:00, 29.2 steps/min]2025-08-11 16:32:41,213 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:41,855 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963c77cb-da48-40d1-9aa4-74c4afb3b6d7/close \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1938/7340 [66:24<185:07, 29.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 26%|██████████------------------------------| 1938/7340 [66:25<185:10, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]ERROR:asyncio:Unclosed client session\n",
+ "client_session: \n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 26%|██████████------------------------------| 1938/7340 [66:27<185:15, 29.2 steps/min]\u001b[92m16:32:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.48s/it]\n",
+ "2025-08-11 16:32:46,573 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:32:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1939/7340 [66:29<185:13, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:32:48,835 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:32:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 26%|██████████------------------------------| 1939/7340 [66:32<185:19, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]29.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:53,100 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08LARS Resources (Backup)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08LARS Resources (Backup)'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:54,407 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ " 26%|██████████------------------------------| 1939/7340 [66:36<185:31, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:32:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:32:55,743 - agent.ComputerAgent - INFO - Computer: click({'x': 1009, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1009, 'y': 101})\n",
+ "2025-08-11 16:32:56,380 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 527})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 46, 'y': 527})\n",
+ "2025-08-11 16:32:57,051 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 244})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:32:58,377 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "2025-08-11 16:32:59,058 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:32:59,707 - agent.ComputerAgent - INFO - Computer: click({'x': 693, 'y': 698})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 693, 'y': 698})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 26%|██████████------------------------------| 1940/7340 [66:43<185:43, 29.1 steps/min]\u001b[92m16:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:02,296 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 70, 'y': 77})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 26%|██████████------------------------------| 1945/7340 [66:44<185:07, 29.1 steps/min]\u001b[92m16:33:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:03,527 - agent.ComputerAgent - INFO - Computer: click({'x': 635, 'y': 468})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 635, 'y': 468})\n",
+ "\u001b[92m16:33:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:04,208 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -628, 'scroll_x': 0, 'x': 526, 'y': 463})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -628, 'scroll_x': 0, 'x': 526, 'y': 463})\n",
+ "\u001b[92m16:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1946/7340 [66:45<185:03, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:33:04,854 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:05,505 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 169})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 969, 'y': 169})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1948/7340 [66:47<184:53, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:07,437 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 181})\n",
+ "\u001b[92m16:33:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1949/7340 [66:49<184:49, 29.2 steps/min]2025-08-11 16:33:08,125 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 321})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:08,772 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:33:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:09,435 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:10,071 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:33:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1950/7340 [66:51<184:49, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:11,418 - agent.ComputerAgent - INFO - Computer: type({'text': 'orig=$(find . -path ./fails -prune -o -type f -name \"*failed.ipynb\" -print | wc -l); copied=$(find ./fails -type f -name \"*failed.ipynb\" -print | wc -l); echo \"orig=$orig copied=$copied\"'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'orig=$(find . -path ./fails -prune -o -type f -name \"*failed.ipynb\" -print | wc -l); copied=$(find ./fails -type f -name \"*failed.ipynb\" -print | wc -l); echo \"orig=$orig copied=$copied\"'})\n",
+ "2025-08-11 16:33:12,446 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1951/7340 [66:54<184:48, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:14,195 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo apt-get update -y && sudo apt-get install -y steghide binwalk exiftool ffmpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sudo apt-get update -y && sudo apt-get install -y steghide binwalk exiftool ffmpeg\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1952/7340 [66:55<184:44, 29.2 steps/min]2025-08-11 16:33:14,816 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:33:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:15,505 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:33:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:16,172 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/reset \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1953/7340 [66:57<184:42, 29.2 steps/min]2025-08-11 16:33:16,865 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1953/7340 [66:58<184:45, 29.2 steps/min]2025-08-11 16:33:17,496 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:33:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1953/7340 [67:01<184:53, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1953/7340 [67:03<184:58, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:22,384 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:23,684 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 95})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 95})\n",
+ " 27%|██████████------------------------------| 1953/7340 [67:05<185:03, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:24,329 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:33:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:33:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:27,033 - agent.ComputerAgent - INFO - Computer: click({'x': 592, 'y': 568})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 592, 'y': 568})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:33:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1954/7340 [67:09<185:06, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:28,365 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 664, 'y': 213})\n",
+ "\u001b[92m16:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:29,040 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 489, 'y': 427})\n",
+ "\u001b[92m16:33:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1955/7340 [67:10<185:02, 29.1 steps/min]2025-08-11 16:33:29,694 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 83, 'y': 139})\n",
+ "2025-08-11 16:33:30,372 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -658, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -658, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ " 27%|██████████------------------------------| 1957/7340 [67:12<184:50, 29.1 steps/min]2025-08-11 16:33:31,077 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:33:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:31,780 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:33:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:33,106 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 27%|██████████------------------------------| 1959/7340 [67:14<184:42, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:34,429 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1959/7340 [67:16<184:46, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:35,086 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:33:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:33:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:35,779 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 73})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1959/7340 [67:17<184:50, 29.1 steps/min]2025-08-11 16:33:36,470 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:33:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:33:37,105 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:33:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1960/7340 [67:19<184:48, 29.1 steps/min]\u001b[92m16:33:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:38,439 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:39,107 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m16:33:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1960/7340 [67:20<184:51, 29.1 steps/min]\u001b[92m16:33:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:39,749 - agent.ComputerAgent - INFO - Computer: click({'x': 715, 'y': 627})\n",
+ " 27%|██████████------------------------------| 1960/7340 [67:21<184:54, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1962/7340 [67:22<184:41, 29.1 steps/min]2025-08-11 16:33:41,945 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/485267e4-f348-45f0-a08d-1d1f28a01f1d/close \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1962/7340 [67:24<184:45, 29.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1962/7340 [67:25<184:48, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:33:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1962/7340 [67:26<184:52, 29.1 steps/min]2025-08-11 16:33:45,677 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:33:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]2025-08-11 16:33:46,392 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:33:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1962/7340 [67:28<184:56, 29.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]\u001b[92m16:33:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:49,339 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd /home/user && ls'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 27%|██████████------------------------------| 1963/7340 [67:32<184:59, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 27%|██████████------------------------------| 1963/7340 [67:33<185:02, 29.1 steps/min]\u001b[92m16:33:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:52,223 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1963/7340 [67:34<185:06, 29.0 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:53,553 - agent.ComputerAgent - INFO - Computer: click({'x': 199, 'y': 184})\n",
+ "2025-08-11 16:33:54,230 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:33:54,230 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 649})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:33:55,529 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 27%|██████████------------------------------| 1964/7340 [67:37<185:05, 29.0 steps/min]2025-08-11 16:33:56,191 - agent.ComputerAgent - INFO - Computer: double_click({'x': 525, 'y': 456})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:33:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:33:58,220 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:33:59,528 - agent.ComputerAgent - INFO - Computer: type({'text': 'pre.pptx'})\n",
+ "2025-08-11 16:34:00,167 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ " 27%|██████████------------------------------| 1966/7340 [67:41<185:03, 29.0 steps/min]\u001b[92m16:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:00,843 - agent.ComputerAgent - INFO - Computer: click({'x': 334, 'y': 355})\n",
+ "2025-08-11 16:34:01,509 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1969/7340 [67:43<184:43, 29.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:03,337 - agent.ComputerAgent - INFO - Computer: type({'text': 'Total'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1971/7340 [67:46<184:35, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1971/7340 [67:47<184:39, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:06,211 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:07,533 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 223, 'y': 739})\n",
+ "2025-08-11 16:34:08,150 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1971/7340 [67:49<184:46, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:09,527 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:10,190 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:10,838 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:11,476 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:34:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:12,120 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 71})\n",
+ "\u001b[92m16:34:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1972/7340 [67:53<184:49, 29.0 steps/min]2025-08-11 16:34:12,795 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:34:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:13,455 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -663, 'scroll_x': 0, 'x': 1008, 'y': 404})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:14,812 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 27%|██████████------------------------------| 1973/7340 [67:56<184:49, 29.0 steps/min]2025-08-11 16:34:15,458 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:34:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1975/7340 [67:59<184:40, 29.1 steps/min]\u001b[92m16:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:18,098 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 283})\n",
+ "\u001b[92m16:34:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:18,738 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 35})\n",
+ "\u001b[92m16:34:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1975/7340 [68:00<184:44, 29.0 steps/min]2025-08-11 16:34:19,407 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:34:19,408 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 761})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1977/7340 [68:01<184:32, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:20,728 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:21,440 - agent.ComputerAgent - INFO - Computer: click({'x': 727, 'y': 574})\n",
+ " 27%|██████████------------------------------| 1978/7340 [68:03<184:28, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1979/7340 [68:04<184:24, 29.1 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:23,905 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 167})\n",
+ "\u001b[92m16:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1979/7340 [68:05<184:27, 29.1 steps/min]\n",
+ "2025-08-11 16:34:24,540 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:25,201 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:34:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1980/7340 [68:07<184:23, 29.1 steps/min]2025-08-11 16:34:25,883 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:34:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:26,616 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:34:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:27,616 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:34:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:30,043 - agent.ComputerAgent - INFO - Computer: type({'text': 'Keira Daily'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1981/7340 [68:11<184:29, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1982/7340 [68:12<184:24, 29.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:31,749 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:32,455 - agent.ComputerAgent - INFO - Computer: double_click({'x': 422, 'y': 370})\n",
+ "2025-08-11 16:34:33,138 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:34:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:34:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1982/7340 [68:14<184:29, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:33,812 - agent.ComputerAgent - INFO - Computer: click({'x': 359, 'y': 427})\n",
+ "2025-08-11 16:34:34,467 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:34:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1983/7340 [68:16<184:25, 29.0 steps/min]2025-08-11 16:34:35,116 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:17<184:21, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:36,799 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:18<184:24, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:19<184:27, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:38,979 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:34:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:20<184:30, 29.0 steps/min]2025-08-11 16:34:39,668 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:34:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:40,342 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:34:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:22<184:34, 29.0 steps/min]2025-08-11 16:34:41,020 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:34:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:23<184:36, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:24<184:40, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:43,719 - agent.ComputerAgent - INFO - Computer: click({'x': 314, 'y': 56})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1984/7340 [68:26<184:44, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:45,578 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 32})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1985/7340 [68:28<184:43, 29.0 steps/min]\u001b[92m16:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:34:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:48,849 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:34:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:34:49,513 - agent.ComputerAgent - INFO - Computer: click({'x': 242, 'y': 73})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:50,842 - agent.ComputerAgent - INFO - Computer: type({'text': '12'})\n",
+ " 27%|██████████------------------------------| 1986/7340 [68:32<184:46, 29.0 steps/min]\u001b[92m16:34:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:51,516 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 621})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:52,156 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -658, 'scroll_x': 0, 'x': 849, 'y': 502})\n",
+ "\u001b[92m16:34:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:34:52,815 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 429})\n",
+ " 27%|██████████------------------------------| 1988/7340 [68:34<184:36, 29.0 steps/min]2025-08-11 16:34:53,478 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:34:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1991/7340 [68:35<184:16, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:34:55,323 - agent.ComputerAgent - INFO - Agent: Copied all files matching \"*failed.ipynb\" into ./fails with directory structure preserved.\n",
+ "\n",
+ "Summary:\n",
+ "- Copied 3 files.\n",
+ "- Verified: orig=3 copied=3\n",
+ "- Example paths now in ./fails:\n",
+ " - fails/server/test_environment/dir1/a_failed.ipynb\n",
+ " - fails/server/test_environment/dir2/subdir/c_failed.ipynb\n",
+ " - fails/server/test_environment/dir3/e_failed.ipynb\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:34:55,990 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 615\n",
+ " - prompt_tokens: 9473\n",
+ " - total_tokens: 10088\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0180\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1992/7340 [68:38<184:16, 29.0 steps/min]\u001b[92m16:34:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:34:57,996 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:34:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:34:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 1992/7340 [68:42<184:27, 29.0 steps/min]\u001b[92m16:35:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:01,246 - agent.ComputerAgent - INFO - Computer: click({'x': 442, 'y': 389})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:02,538 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd /home/user && ls'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:03,880 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:35:03,881 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "\u001b[92m16:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:35:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 27%|██████████------------------------------| 1993/7340 [68:46<184:30, 29.0 steps/min]2025-08-11 16:35:05,212 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 143})\n",
+ "2025-08-11 16:35:05,899 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 320})\n",
+ "2025-08-11 16:35:06,553 - agent.ComputerAgent - INFO - Computer: triple_click({'x': 489, 'y': 440})\n",
+ "2025-08-11 16:35:06,554 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "\u001b[92m16:35:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+        "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: triple_click\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:35:07,236 - agent.ComputerAgent - INFO - Computer: click({'x': 67, 'y': 343})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:07,936 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:35:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:08,591 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:35:09,263 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:35:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 1996/7340 [68:51<184:20, 29.0 steps/min]2025-08-11 16:35:10,276 - agent.ComputerAgent - INFO - Computer: click({'x': 196, 'y': 213})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:10,921 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:35:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 27%|██████████------------------------------| 2000/7340 [68:53<183:56, 29.0 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:12,276 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:35:12,982 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 285})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 2001/7340 [68:54<183:52, 29.0 steps/min]2025-08-11 16:35:13,657 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:35:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 2002/7340 [68:55<183:47, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:35:14,845 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "\u001b[92m16:35:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 2013/7340 [68:56<182:26, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/488d7653-4f2d-4576-85c7-d87dc7a875ef/close \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 2013/7340 [68:58<182:31, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 2014/7340 [68:59<182:26, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:18,242 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:35:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:18,918 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:35:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:19,600 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:35:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:20,282 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m16:35:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d4054e85-5304-43a3-b6d7-128e302780cb/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 2014/7340 [69:02<182:33, 29.2 steps/min]2025-08-11 16:35:20,970 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:35:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:21,619 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:35:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 2014/7340 [69:03<182:37, 29.2 steps/min]2025-08-11 16:35:22,290 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:22,980 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:35:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 27%|██████████------------------------------| 2015/7340 [69:04<182:33, 29.2 steps/min]2025-08-11 16:35:23,645 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:35:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:24,342 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:35:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 27%|██████████------------------------------| 2015/7340 [69:06<182:38, 29.2 steps/min]\u001b[92m16:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:25,689 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:35:27,029 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:28,399 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 27%|██████████------------------------------| 2015/7340 [69:10<182:47, 29.1 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 27%|██████████------------------------------| 2016/7340 [69:11<182:42, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 16:35:30,987 - agent.ComputerAgent - INFO - Agent: Your slides have been saved as pre.pptx on the Desktop. Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: Your slides have been saved as pre.pptx on the Desktop. Task completed.\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]2025-08-11 16:35:31,870 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 8059\n",
+ " - total_tokens: 8277\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0123\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 8059\n",
+ " - total_tokens: 8277\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0123\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 27%|██████████------------------------------| 2017/7340 [69:14<182:43, 29.1 steps/min]\u001b[92m16:35:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]2025-08-11 16:35:33,367 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:35:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+        "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d0000302-258b-4660-9baa-e149c2ad83fd/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 28%|███████████-----------------------------| 2031/7340 [69:17<181:08, 29.3 steps/min]\u001b[92m16:35:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:35:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:35:37,227 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 157})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]2025-08-11 16:35:38,649 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 103})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 349, 'y': 103})\n",
+ "2025-08-11 16:35:39,288 - agent.ComputerAgent - INFO - Computer: double_click({'x': 432, 'y': 389})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 432, 'y': 389})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 16:35:40,926 - agent.ComputerAgent - INFO - Computer: type({'text': '50 Dollar Dollar'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '50 Dollar Dollar'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]\u001b[92m16:35:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2032/7340 [69:23<181:15, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2036/7340 [69:24<180:49, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:43,616 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2046/7340 [69:25<179:38, 29.5 steps/min]\u001b[92m16:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:44,279 - agent.ComputerAgent - INFO - Computer: click({'x': 254, 'y': 357})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 254, 'y': 357})\n",
+ "\u001b[92m16:35:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:44,974 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 559})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 543, 'y': 559})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f83df7e3-6ab0-404e-9745-09768e42b6fb/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:46,254 - agent.ComputerAgent - INFO - Computer: type({'text': 'exiftool heron.jpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'exiftool heron.jpeg\\n'})\n",
+ " 28%|███████████-----------------------------| 2047/7340 [69:28<179:37, 29.5 steps/min]2025-08-11 16:35:46,909 - agent.ComputerAgent - INFO - Computer: click({'x': 501, 'y': 33})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 501, 'y': 33})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:35:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:35:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:35:48,925 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -644, 'scroll_x': 0, 'x': 849, 'y': 517})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -644, 'scroll_x': 0, 'x': 849, 'y': 517})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2050/7340 [69:30<179:22, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:35:49,587 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:35:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:50,254 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:50,911 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2052/7340 [69:32<179:13, 29.5 steps/min]2025-08-11 16:35:51,561 - agent.ComputerAgent - INFO - Computer: click({'x': 121, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 121, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:52,863 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:54,209 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(B2:B11)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=SUM(B2:B11)'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:35:56,236 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:35:56,237 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+i'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+i'})\n",
+ " 28%|███████████-----------------------------| 2052/7340 [69:37<179:26, 29.5 steps/min]2025-08-11 16:35:56,862 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:35:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:35:57,541 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:35:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2055/7340 [69:39<179:08, 29.5 steps/min]2025-08-11 16:35:58,171 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:35:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2055/7340 [69:41<179:13, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 28%|███████████-----------------------------| 2056/7340 [69:42<179:08, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:01,538 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2056/7340 [69:43<179:11, 29.5 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "2025-08-11 16:36:02,197 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:36:02,882 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2056/7340 [69:44<179:14, 29.5 steps/min]2025-08-11 16:36:03,530 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:36:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.72s/it]2025-08-11 16:36:04,206 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:36:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 28%|███████████-----------------------------| 2057/7340 [69:46<179:10, 29.5 steps/min]2025-08-11 16:36:04,901 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:36:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:05,756 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.74s/it]INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:36:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2057/7340 [69:47<179:14, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:06,385 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:36:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:36:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]\n",
+ "2025-08-11 16:36:08,367 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+='})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+='})\n",
+ " 28%|███████████-----------------------------| 2057/7340 [69:50<179:21, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:09,540 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:36:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 28%|███████████-----------------------------| 2058/7340 [69:51<179:17, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:36:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a0a74ba-160b-41ee-a6d2-6dc61c143d94/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4813e5e3-be12-40e2-9cc0-d5be0ad320cf/close \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2058/7340 [69:52<179:20, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:11,374 - agent.ComputerAgent - INFO - Computer: click({'x': 719, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 719, 'y': 294})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2058/7340 [69:54<179:25, 29.4 steps/min]\u001b[92m16:36:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:13,273 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:36:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:13,968 - agent.ComputerAgent - INFO - Computer: click({'x': 530, 'y': 359})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 530, 'y': 359})\n",
+ "2025-08-11 16:36:14,613 - agent.ComputerAgent - INFO - Computer: click({'x': 537, 'y': 568})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 537, 'y': 568})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2059/7340 [69:57<179:24, 29.4 steps/min]\u001b[92m16:36:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]2025-08-11 16:36:17,293 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2062/7340 [69:59<179:09, 29.5 steps/min]\u001b[92m16:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.56s/it]2025-08-11 16:36:18,808 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:36:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2062/7340 [70:00<179:12, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.55s/it]2025-08-11 16:36:20,504 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:36:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]29.4 steps/min]\n",
+ "2025-08-11 16:36:21,171 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:36:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:21,812 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:36:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:36:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 28%|███████████-----------------------------| 2062/7340 [70:04<179:23, 29.4 steps/min]\u001b[92m16:36:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:23,791 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:36:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:36:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:36:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:24,477 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:36:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:25,755 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+u'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+u'})\n",
+ " 28%|███████████-----------------------------| 2062/7340 [70:07<179:29, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:26,411 - agent.ComputerAgent - INFO - Computer: click({'x': 241, 'y': 208})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 241, 'y': 208})\n",
+ "\u001b[92m16:36:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:27,062 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:27,734 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 849, 'y': 512}, {'x': 849, 'y': 414}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 849, 'y': 512}, {'x': 849, 'y': 414}]})\n",
+ " 28%|███████████-----------------------------| 2064/7340 [70:09<179:20, 29.4 steps/min]\u001b[92m16:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:36:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:28,917 - agent.ComputerAgent - INFO - Computer: drag({'start_element_description': 'Cell A1', 'end_element_description': 'Cell A18', 'x': 87, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'start_element_description': 'Cell A1', 'end_element_description': 'Cell A18', 'x': 87, 'y': 168})\n",
+ " 28%|███████████-----------------------------| 2067/7340 [70:10<179:01, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:29,543 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:36:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:30,886 - agent.ComputerAgent - INFO - Computer: type({'text': 'gnome-screenshot -f /home/user/Desktop/ls.png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'gnome-screenshot -f /home/user/Desktop/ls.png'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2067/7340 [70:12<179:06, 29.4 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:36:31,562 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:36:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:32,875 - agent.ComputerAgent - INFO - Computer: type({'text': 'steghide extract -sf heron.jpeg -p \"\"\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'steghide extract -sf heron.jpeg -p \"\"\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2068/7340 [70:15<179:06, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:34,225 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:36:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:36:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:36:34,902 - agent.ComputerAgent - INFO - Computer: click({'x': 157, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 157, 'y': 89})\n",
+ " 28%|███████████-----------------------------| 2071/7340 [70:16<178:47, 29.5 steps/min]2025-08-11 16:36:35,553 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:36:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:36,863 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 28%|███████████-----------------------------| 2072/7340 [70:18<178:45, 29.5 steps/min]2025-08-11 16:36:37,530 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:36:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:36:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2073/7340 [70:19<178:41, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:38,882 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:36:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:39,562 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:36:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:36:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2073/7340 [70:22<178:47, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:40,916 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:36:41,601 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:36:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2074/7340 [70:23<178:43, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:42,295 - agent.ComputerAgent - INFO - Computer: click({'x': 568, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 568, 'y': 105})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:43,301 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2076/7340 [70:25<178:33, 29.5 steps/min]2025-08-11 16:36:43,958 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:44,603 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:36:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2077/7340 [70:26<178:29, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:45,772 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:36:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2078/7340 [70:28<178:26, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:47,052 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:36:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:36:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:47,726 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 32})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 32})\n",
+ " 28%|███████████-----------------------------| 2078/7340 [70:29<178:30, 29.5 steps/min]2025-08-11 16:36:48,394 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:36:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:49,033 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 28%|███████████-----------------------------| 2081/7340 [70:30<178:11, 29.5 steps/min]2025-08-11 16:36:49,689 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2081/7340 [70:31<178:14, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:51,337 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:36:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2081/7340 [70:33<178:17, 29.5 steps/min]2025-08-11 16:36:52,005 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:36:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:36:52,641 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:36:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2081/7340 [70:34<178:20, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 28%|███████████-----------------------------| 2082/7340 [70:35<178:16, 29.5 steps/min]\u001b[92m16:36:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:54,516 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:36:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:36:56,522 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2083/7340 [70:38<178:17, 29.5 steps/min]\u001b[92m16:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:36:58,442 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 612, 'scroll_x': 0, 'x': 434, 'y': 399})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 612, 'scroll_x': 0, 'x': 434, 'y': 399})\n",
+ "\u001b[92m16:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:36:59,777 - agent.ComputerAgent - INFO - Computer: type({'text': 'Settings'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Settings'})\n",
+ " 28%|███████████-----------------------------| 2085/7340 [70:41<178:10, 29.5 steps/min]2025-08-11 16:37:00,463 - agent.ComputerAgent - INFO - Computer: click({'x': 463, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 463, 'y': 219})\n",
+ "\u001b[92m16:37:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:01,096 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:37:01,098 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 657})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 657})\n",
+ " 28%|███████████-----------------------------| 2087/7340 [70:42<177:59, 29.5 steps/min]2025-08-11 16:37:01,730 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:37:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2089/7340 [70:43<177:47, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:02,903 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:37:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2089/7340 [70:46<177:55, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:06,123 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:37:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 28%|███████████-----------------------------| 2090/7340 [70:47<177:50, 29.5 steps/min]2025-08-11 16:37:06,753 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:37:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:07,384 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:37:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2090/7340 [70:49<177:53, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:37:08,744 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:37:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:37:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 28%|███████████-----------------------------| 2090/7340 [70:50<177:57, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:09,396 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 121})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 121})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:37:10,082 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:37:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 28%|███████████-----------------------------| 2091/7340 [70:51<177:53, 29.5 steps/min]2025-08-11 16:37:10,733 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:37:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4adb2bbf-d6e6-4d15-9e9a-c199cf02d5d6/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:11,402 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:37:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:37:12,752 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:37:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2092/7340 [70:54<177:52, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2092/7340 [70:55<177:55, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 29%|███████████-----------------------------| 2092/7340 [70:56<177:57, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\u001b[92m16:37:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2093/7340 [70:58<177:55, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]\u001b[92m16:37:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2093/7340 [70:59<177:58, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:18,513 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:37:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:37:19,415 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:37:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2093/7340 [71:01<178:02, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ " 29%|███████████-----------------------------| 2093/7340 [71:02<178:04, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 29%|███████████-----------------------------| 2094/7340 [71:03<178:00, 29.5 steps/min]\u001b[92m16:37:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:37:22,152 - agent.ComputerAgent - INFO - Computer: double_click({'x': 193, 'y': 119})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 193, 'y': 119})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:37:22,786 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:37:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:37:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2094/7340 [71:04<178:03, 29.5 steps/min]\u001b[92m16:37:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:23,472 - agent.ComputerAgent - INFO - Computer: click({'x': 991, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 991, 'y': 34})\n",
+ "2025-08-11 16:37:24,132 - agent.ComputerAgent - INFO - Computer: double_click({'x': 986, 'y': 574})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 986, 'y': 574})\n",
+ "\u001b[92m16:37:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/fd628f34-1346-4947-bfa4-cf698adb3472/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2095/7340 [71:05<177:59, 29.5 steps/min]2025-08-11 16:37:24,807 - agent.ComputerAgent - INFO - Computer: double_click({'x': 388, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 388, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:37:26,868 - agent.ComputerAgent - INFO - Computer: type({'text': 'steghide extract -sf heron.jpeg -p heron\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'steghide extract -sf heron.jpeg -p heron\\n'})\n",
+ " 29%|███████████-----------------------------| 2097/7340 [71:08<177:52, 29.5 steps/min]\u001b[92m16:37:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:27,483 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:28,774 - agent.ComputerAgent - INFO - Computer: type({'text': 'Professional writing is more than correct grammar and polished vocabulary; it is a way of thinking that clarifies ideas for both writer and reader. A focused introduction guides attention, precise word choice reduces confusion, and a logical flow builds trust. When we shape sentences deliberately, we shape how others understand our work and how confidently we can stand behind it.'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Professional writing is more than correct grammar and polished vocabulary; it is a way of thinking that clarifies ideas for both writer and reader. A focused introduction guides attention, precise word choice reduces confusion, and a logical flow builds trust. When we shape sentences deliberately, we shape how others understand our work and how confidently we can stand behind it.'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2099/7340 [71:11<177:44, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:31,422 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:37:31,422 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+e'})\n",
+ " 29%|███████████-----------------------------| 2101/7340 [71:13<177:35, 29.5 steps/min]\u001b[92m16:37:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:37:32,113 - agent.ComputerAgent - INFO - Computer: click({'x': 961, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 961, 'y': 760})\n",
+ "\u001b[92m16:37:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:32,763 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:37:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:37:33,415 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 755})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 755})\n",
+ " 29%|███████████-----------------------------| 2101/7340 [71:15<177:40, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2103/7340 [71:16<177:28, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81398d20-3c85-489b-9abc-2af244ec1feb/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:37:35,415 - agent.ComputerAgent - INFO - Computer: double_click({'x': 599, 'y': 88})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 599, 'y': 88})\n",
+ " 29%|███████████-----------------------------| 2103/7340 [71:17<177:31, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:36,715 - agent.ComputerAgent - INFO - Agent: The ls command was executed in /home/user, and a screenshot of the output has been saved as ls.png on the Desktop.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: The ls command was executed in /home/user, and a screenshot of the output has been saved as ls.png on the Desktop.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:37:37,358 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 421\n",
+ " - prompt_tokens: 7387\n",
+ " - total_tokens: 7808\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0113\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 421\n",
+ " - prompt_tokens: 7387\n",
+ " - total_tokens: 7808\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0113\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2105/7340 [71:19<177:21, 29.5 steps/min]2025-08-11 16:37:37,996 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:37:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:38,634 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:37:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2106/7340 [71:20<177:17, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:39,285 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:37:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:40,326 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:37:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:41,703 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3980166d-0a7d-4a58-a915-07dbe8b607bb/close \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2106/7340 [71:23<177:25, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:43,033 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:37:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:24<176:04, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:43,705 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:37:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:25<176:06, 29.7 steps/min]2025-08-11 16:37:44,360 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:37:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:37:45,042 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:37:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:37:45,725 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:37:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3ea8855-19d9-4e10-8208-fd9e060997e3/close \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:28<176:13, 29.6 steps/min]2025-08-11 16:37:47,444 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:37:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:30<176:18, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:50,134 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:37:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:31<176:21, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:37:51,286 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:37:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:33<176:24, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:36<176:32, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2118/7340 [71:37<176:35, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 16:37:57,568 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 16:37:58,224 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 267\n",
+ " - prompt_tokens: 6737\n",
+ " - total_tokens: 7004\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0111\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 267\n",
+ " - prompt_tokens: 6737\n",
+ " - total_tokens: 7004\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0111\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:37:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]2025-08-11 16:38:00,670 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 29%|███████████-----------------------------| 2119/7340 [71:42<176:40, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:02,618 - agent.ComputerAgent - INFO - Computer: type({'text': '.odp'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '.odp'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2120/7340 [71:45<176:40, 29.5 steps/min]\u001b[92m16:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:03,929 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 36})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 36})\n",
+ "\u001b[92m16:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:04,573 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 627})\n",
+ "2025-08-11 16:38:05,236 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 50})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 543, 'y': 50})\n",
+ "\u001b[92m16:38:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2121/7340 [71:46<176:37, 29.5 steps/min]2025-08-11 16:38:05,880 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 10})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2124/7340 [71:49<176:23, 29.6 steps/min]\u001b[92m16:38:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:09,200 - agent.ComputerAgent - INFO - Computer: type({'text': 'binwalk heron.jpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'binwalk heron.jpeg\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:10,480 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:38:10,480 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 92})\n",
+ "\u001b[92m16:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2125/7340 [71:52<176:22, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:11,123 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 759})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 759})\n",
+ "2025-08-11 16:38:11,750 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:38:11,751 - agent.ComputerAgent - INFO - Agent: Opening the terminal to run commands to recursively find .jpg files in the 'photos' directory and copy them into the 'cpjpg' directory.\n",
+ "INFO:agent.ComputerAgent:Agent: Opening the terminal to run commands to recursively find .jpg files in the 'photos' directory and copy them into the 'cpjpg' directory.\n",
+ "2025-08-11 16:38:11,752 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 524})\n",
+ "\u001b[92m16:38:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2127/7340 [71:53<176:11, 29.6 steps/min]2025-08-11 16:38:12,402 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 204})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 204})\n",
+ " 29%|███████████-----------------------------| 2129/7340 [71:54<176:00, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2130/7340 [71:56<175:58, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 29%|███████████-----------------------------| 2130/7340 [71:57<176:00, 29.6 steps/min]\u001b[92m16:38:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:16,773 - agent.ComputerAgent - INFO - Computer: click({'x': 852, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 852, 'y': 77})\n",
+ " 29%|███████████-----------------------------| 2130/7340 [71:58<176:03, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:38:17,936 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:38:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2131/7340 [71:59<175:59, 29.6 steps/min]2025-08-11 16:38:18,635 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:38:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:19,337 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:38:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:19,971 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:38:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:01<176:04, 29.6 steps/min]2025-08-11 16:38:20,654 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:38:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:21,344 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:38:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:03<176:07, 29.6 steps/min]2025-08-11 16:38:22,853 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:38:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:23,529 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:38:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:24,165 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:38:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:05<176:14, 29.6 steps/min]2025-08-11 16:38:24,857 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:38:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:25,499 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m16:38:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:07<176:17, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:08<176:19, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:09<176:22, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:10<176:24, 29.5 steps/min]2025-08-11 16:38:29,227 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:38:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:11<176:27, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6d8a38cc-c8f6-484c-9a6d-e6c404b2c7f9/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2131/7340 [72:12<176:31, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:38:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:32,244 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 473, 'y': 60})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2132/7340 [72:17<176:34, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:38:36,689 - agent.ComputerAgent - INFO - Computer: type({'text': 'Text Editor'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Text Editor'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2132/7340 [72:19<176:39, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:38:38,166 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:38:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2133/7340 [72:20<176:35, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+        "Loading checkpoint shards:  25%|██▌       | 1/4 [00:01<00:04,  1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2133/7340 [72:23<176:42, 29.5 steps/min]\u001b[92m16:38:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:38:42,432 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ " 29%|███████████-----------------------------| 2133/7340 [72:24<176:44, 29.5 steps/min]\u001b[92m16:38:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 16:38:44,437 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://developer.apple.com/design/human-interface-guidelines/searching'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://developer.apple.com/design/human-interface-guidelines/searching'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:45,807 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 29%|███████████-----------------------------| 2133/7340 [72:28<176:54, 29.4 steps/min]\u001b[92m16:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:47,050 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:38:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:38:47,722 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 527})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 46, 'y': 527})\n",
+ "2025-08-11 16:38:48,409 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 580, 'scroll_x': 0, 'x': 334, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 580, 'scroll_x': 0, 'x': 334, 'y': 334})\n",
+ "2025-08-11 16:38:49,093 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 724, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 724, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:50,435 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 422})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 422})\n",
+ " 29%|███████████-----------------------------| 2134/7340 [72:32<176:57, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:38:51,133 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 76})\n",
+ "\u001b[92m16:38:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:51,807 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:38:51,807 - agent.ComputerAgent - INFO - Computer: click({'x': 197, 'y': 175})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 197, 'y': 175})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:38:53,157 - agent.ComputerAgent - INFO - Computer: type({'text': 'steghide info -sf heron.jpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'steghide info -sf heron.jpeg\\n'})\n",
+ " 29%|███████████-----------------------------| 2138/7340 [72:34<176:35, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 29%|███████████-----------------------------| 2141/7340 [72:35<176:17, 29.5 steps/min]\u001b[92m16:38:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:38:54,993 - agent.ComputerAgent - INFO - Computer: double_click({'x': 295, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 295, 'y': 131})\n",
+ " 29%|███████████-----------------------------| 2142/7340 [72:39<176:20, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:38:59,255 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2142/7340 [72:41<176:24, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:39:00,595 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:39:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:39:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:01,255 - agent.ComputerAgent - INFO - Computer: click({'x': 230, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 230, 'y': 130})\n",
+ " 29%|███████████-----------------------------| 2142/7340 [72:42<176:27, 29.5 steps/min]2025-08-11 16:39:01,916 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:02,605 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:39:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2143/7340 [72:44<176:24, 29.5 steps/min]2025-08-11 16:39:03,285 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:39:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:03,927 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:39:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:04,564 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:39:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2155/7340 [72:46<175:05, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:05,881 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 29%|███████████-----------------------------| 2155/7340 [72:47<175:08, 29.6 steps/min]2025-08-11 16:39:06,548 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m16:39:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:07,201 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:39:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2156/7340 [72:48<175:04, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b741091-faa0-4d97-9592-0dc410b6cc53/close \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2156/7340 [72:49<175:07, 29.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2156/7340 [72:51<175:12, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2156/7340 [72:52<175:14, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "\u001b[92m16:39:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:39:12,919 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:39:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:13,552 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:39:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 29%|███████████-----------------------------| 2156/7340 [72:55<175:20, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 16:39:18,291 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 29%|███████████-----------------------------| 2156/7340 [73:00<175:31, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 29%|███████████-----------------------------| 2157/7340 [73:01<175:27, 29.5 steps/min]\u001b[92m16:39:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:39:19,982 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 298})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 656, 'y': 298})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:21,940 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+end'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+end'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:23,311 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "\u001b[92m16:39:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2157/7340 [73:05<175:36, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:39:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:39:23,970 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:39:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:24,630 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 473, 'y': 202})\n",
+ "2025-08-11 16:39:25,286 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 639, 'scroll_x': 0, 'x': 336, 'y': 341})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 639, 'scroll_x': 0, 'x': 336, 'y': 341})\n",
+ "\u001b[92m16:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:25,947 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 592, 'scroll_x': 0, 'x': 434, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 592, 'scroll_x': 0, 'x': 434, 'y': 426})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2159/7340 [73:07<175:29, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:39:27,939 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\nmkdir -p cpjpg\\nfind photos -type f -iname \"*.jpg\" -exec cp -n -t cpjpg -- {} +\\nfind photos -type f -iname \"*.jpg\" | wc -l\\nls -l cpjpg | wc -l'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/Desktop\\nmkdir -p cpjpg\\nfind photos -type f -iname \"*.jpg\" -exec cp -n -t cpjpg -- {} +\\nfind photos -type f -iname \"*.jpg\" | wc -l\\nls -l cpjpg | wc -l'})\n",
+ "\u001b[92m16:39:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2162/7340 [73:09<175:13, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:28,584 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 112, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 29%|███████████-----------------------------| 2163/7340 [73:11<175:09, 29.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:39:29,929 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:39:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:39:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:30,614 - agent.ComputerAgent - INFO - Computer: click({'x': 627, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 627, 'y': 248})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 29%|███████████-----------------------------| 2164/7340 [73:12<175:05, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 29%|███████████-----------------------------| 2165/7340 [73:13<175:01, 29.6 steps/min]\u001b[92m16:39:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:32,433 - agent.ComputerAgent - INFO - Computer: click({'x': 713, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 713, 'y': 40})\n",
+ " 29%|███████████-----------------------------| 2165/7340 [73:14<175:03, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:34,364 - agent.ComputerAgent - INFO - Computer: type({'text': 'steghide info heron.jpeg\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'steghide info heron.jpeg\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2166/7340 [73:16<175:01, 29.6 steps/min]2025-08-11 16:39:34,988 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:35,667 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:17<174:57, 29.6 steps/min]2025-08-11 16:39:36,713 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:39:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/reset \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:18<174:59, 29.6 steps/min]2025-08-11 16:39:37,345 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:39:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:38,016 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:39:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:19<175:03, 29.6 steps/min]2025-08-11 16:39:38,707 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:39:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:39,360 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:39:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:40,042 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:21<175:07, 29.5 steps/min]\u001b[92m16:39:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:39:40,691 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m16:39:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:22<175:10, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:42,368 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:39:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:24<175:13, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:44,223 - agent.ComputerAgent - INFO - Computer: type({'text': '1\\n2\\n3'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '1\\n2\\n3'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2167/7340 [73:25<175:17, 29.5 steps/min]2025-08-11 16:39:45,389 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2168/7340 [73:27<175:13, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2168/7340 [73:28<175:16, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:48,220 - agent.ComputerAgent - INFO - Computer: type({'text': 'Manchester, GB'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Manchester, GB'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8956c64b-871b-43e2-84de-047c8ce2a839/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2168/7340 [73:30<175:22, 29.5 steps/min]\u001b[92m16:39:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 30%|███████████-----------------------------| 2169/7340 [73:31<175:17, 29.5 steps/min]\u001b[92m16:39:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:39:50,735 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:39:50,735 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 173})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2169/7340 [73:32<175:20, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:39:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:39:52,098 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:39:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2170/7340 [73:33<175:15, 29.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]2025-08-11 16:39:54,085 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2170/7340 [73:36<175:22, 29.5 steps/min]\u001b[92m16:39:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:39:56,139 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:39:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2171/7340 [73:37<175:18, 29.5 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2171/7340 [73:38<175:21, 29.5 steps/min]2025-08-11 16:39:58,028 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:39:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2171/7340 [73:39<175:23, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 30%|███████████-----------------------------| 2171/7340 [73:40<175:25, 29.5 steps/min]\u001b[92m16:39:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:39:59,694 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 426})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2171/7340 [73:44<175:33, 29.4 steps/min]\u001b[92m16:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:02,952 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:40:02,953 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:04,954 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 659, 'scroll_x': 0, 'x': 20, 'y': 44})\n",
+ "\u001b[92m16:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2172/7340 [73:46<175:32, 29.4 steps/min]\n",
+ "\u001b[92m16:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:05,643 - agent.ComputerAgent - INFO - Computer: click({'x': 452, 'y': 305})\n",
+ "2025-08-11 16:40:06,342 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:40:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:40:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:07,039 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 56})\n",
+ "\u001b[92m16:40:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:40:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2174/7340 [73:49<175:25, 29.4 steps/min]2025-08-11 16:40:08,345 - agent.ComputerAgent - INFO - Computer: click({'x': 224, 'y': 53})\n",
+ "2025-08-11 16:40:09,049 - agent.ComputerAgent - INFO - Computer: double_click({'x': 323, 'y': 88})\n",
+ "\u001b[92m16:40:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:09,740 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 318, 'y': 427}, {'x': 209, 'y': 490}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2176/7340 [73:52<175:18, 29.5 steps/min]\u001b[92m16:40:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:40:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:11,819 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 627})\n",
+ "\u001b[92m16:40:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2179/7340 [73:53<175:00, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:12,515 - agent.ComputerAgent - INFO - Computer: click({'x': 436, 'y': 106})\n",
+ "\u001b[92m16:40:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:13,161 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 270})\n",
+ " 30%|███████████-----------------------------| 2182/7340 [73:58<174:53, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:18,762 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2182/7340 [74:00<174:56, 29.5 steps/min]2025-08-11 16:40:19,427 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:40:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:20,107 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:40:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:40:20,777 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:40:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2182/7340 [74:02<175:01, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:21,467 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:40:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:40:22,106 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:40:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2182/7340 [74:03<175:04, 29.5 steps/min]2025-08-11 16:40:22,763 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:40:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:40:23,786 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:40:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2182/7340 [74:05<175:08, 29.4 steps/min]2025-08-11 16:40:24,448 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:40:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:25,098 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "\u001b[92m16:40:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:40:25,758 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:40:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2182/7340 [74:08<175:14, 29.4 steps/min]\u001b[92m16:40:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:40:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:27,580 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 503})\n",
+ " 30%|███████████-----------------------------| 2183/7340 [74:10<175:13, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:29,237 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2183/7340 [74:11<175:15, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2183/7340 [74:12<175:18, 29.4 steps/min]2025-08-11 16:40:31,570 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:40:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:33,254 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+end'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2183/7340 [74:15<175:25, 29.4 steps/min]\u001b[92m16:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:34,550 - agent.ComputerAgent - INFO - Computer: click({'x': 648, 'y': 104})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:35,218 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2183/7340 [74:16<175:28, 29.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:35,898 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:36,525 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:40:36,527 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 385})\n",
+ " 30%|███████████-----------------------------| 2184/7340 [74:18<175:25, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:38,413 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 30%|███████████-----------------------------| 2185/7340 [74:20<175:22, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2186/7340 [74:21<175:18, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:40,608 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:41,919 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\nmkdir -p cpjpg\\nfind photos -type f -iname \"*.jpg\" -exec cp -t cpjpg -- {} +\\nls -l cpjpg'})\n",
+ " 30%|███████████-----------------------------| 2186/7340 [74:23<175:24, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:43,223 - agent.ComputerAgent - INFO - Computer: type({'text': 'vim show absolute line numbers tutorial set number'})\n",
+ " 30%|███████████-----------------------------| 2187/7340 [74:24<175:20, 29.4 steps/min]2025-08-11 16:40:43,898 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:40:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2188/7340 [74:26<175:17, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:46,007 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:40:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:40:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:47,317 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:40:47,318 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+pagedown'})\n",
+ " 30%|███████████-----------------------------| 2188/7340 [74:29<175:23, 29.4 steps/min]2025-08-11 16:40:47,987 - agent.ComputerAgent - INFO - Computer: click({'x': 622, 'y': 227})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:48,619 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:40:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2188/7340 [74:30<175:26, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 30%|███████████-----------------------------| 2189/7340 [74:32<175:24, 29.4 steps/min]\u001b[92m16:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:51,323 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:40:51,985 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 655, 'scroll_x': 0, 'x': 336, 'y': 374})\n",
+ "\u001b[92m16:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:40:53,244 - agent.ComputerAgent - INFO - Computer: type({'text': 'output.txt'})\n",
+ " 30%|███████████-----------------------------| 2189/7340 [74:34<175:30, 29.3 steps/min]\u001b[92m16:40:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:53,885 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2191/7340 [74:37<175:22, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:56,252 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:40:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:40:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:40:56,902 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 991, 'y': 157}, {'x': 991, 'y': 568}]})\n",
+ "\u001b[92m16:40:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2192/7340 [74:38<175:18, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:40:57,930 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:40:58,595 - agent.ComputerAgent - INFO - Computer: click({'x': 298, 'y': 291})\n",
+ "\u001b[92m16:40:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2193/7340 [74:41<175:18, 29.4 steps/min]2025-08-11 16:41:00,572 - agent.ComputerAgent - INFO - Computer: click({'x': 986, 'y': 760})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:01,211 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "\u001b[92m16:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2194/7340 [74:43<175:14, 29.4 steps/min]2025-08-11 16:41:01,875 - agent.ComputerAgent - INFO - Computer: click({'x': 1009, 'y': 192})\n",
+ "\u001b[92m16:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:41:03,169 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:41:04,448 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:41:04,449 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "2025-08-11 16:41:05,117 - agent.ComputerAgent - INFO - Computer: move({'x': 19, 'y': 43})\n",
+ " 30%|███████████-----------------------------| 2195/7340 [74:46<175:16, 29.4 steps/min]2025-08-11 16:41:05,732 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:41:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 30%|███████████-----------------------------| 2199/7340 [74:48<174:54, 29.4 steps/min]\u001b[92m16:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:41:07,718 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:41:09,059 - agent.ComputerAgent - INFO - Computer: click({'x': 132, 'y': 212})\n",
+ "\u001b[92m16:41:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|███████████-----------------------------| 2199/7340 [74:50<174:58, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:09,719 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:41:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:10,396 - agent.ComputerAgent - INFO - Computer: click({'x': 530, 'y': 162})\n",
+ "\u001b[92m16:41:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|███████████-----------------------------| 2200/7340 [74:52<174:56, 29.4 steps/min]2025-08-11 16:41:11,698 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:41:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:12,356 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:13,036 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 76})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:41:14,339 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|███████████-----------------------------| 2201/7340 [74:56<174:57, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:15,718 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "2025-08-11 16:41:16,412 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 704})\n",
+ "\u001b[92m16:41:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2202/7340 [74:58<174:55, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:41:17,098 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:17,733 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:18,419 - agent.ComputerAgent - INFO - Computer: click({'x': 663, 'y': 434})\n",
+ " 30%|████████████----------------------------| 2204/7340 [75:00<174:47, 29.4 steps/min]2025-08-11 16:41:19,068 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m16:41:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2205/7340 [75:01<174:42, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2205/7340 [75:02<174:45, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:41:21,399 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:41:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:22,050 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2205/7340 [75:03<174:48, 29.4 steps/min]2025-08-11 16:41:22,737 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 34})\n",
+ "2025-08-11 16:41:23,755 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:41:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2205/7340 [75:05<174:52, 29.4 steps/min]2025-08-11 16:41:24,783 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:41:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2206/7340 [75:06<174:48, 29.4 steps/min]2025-08-11 16:41:25,407 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:41:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:26,049 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:41:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2206/7340 [75:07<174:50, 29.4 steps/min]2025-08-11 16:41:26,755 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:41:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:41:28,070 - agent.ComputerAgent - INFO - Computer: type({'text': '\\nrm -rf cpjpg\\nmkdir -p cpjpg\\nfind photos -type f -iname \"*.jpg\" -exec cp -n -t cpjpg -- {} +\\nfind photos -type f -iname \"*.jpg\" | wc -l\\nls -1 cpjpg | wc -l\\nls -l cpjpg | head'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2206/7340 [75:10<174:57, 29.3 steps/min]\u001b[92m16:41:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:41:29,855 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 0})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:41:31,140 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+end'})\n",
+ " 30%|████████████----------------------------| 2207/7340 [75:12<174:55, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:41:33,746 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ " 30%|████████████----------------------------| 2208/7340 [75:15<174:55, 29.3 steps/min]\u001b[92m16:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:34,413 - agent.ComputerAgent - INFO - Computer: click({'x': 576, 'y': 355})\n",
+ "2025-08-11 16:41:35,086 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 463})\n",
+ "\u001b[92m16:41:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2209/7340 [75:16<174:51, 29.3 steps/min]2025-08-11 16:41:35,730 - agent.ComputerAgent - INFO - Computer: click({'x': 337, 'y': 325})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 337, 'y': 325})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:41:36,378 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:41:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2211/7340 [75:18<174:40, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b2ca79e3-4425-4cd4-a9dd-42e2431eb008/close \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:19<174:38, 29.4 steps/min]2025-08-11 16:41:38,696 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m16:41:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:21<174:42, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:22<174:45, 29.3 steps/min]2025-08-11 16:41:42,015 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:24<174:48, 29.3 steps/min]2025-08-11 16:41:43,346 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]2025-08-11 16:41:44,648 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:41:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:27<174:56, 29.3 steps/min]\u001b[92m16:41:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]2025-08-11 16:41:47,446 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:41:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:29<174:59, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:31<175:05, 29.3 steps/min]\u001b[92m16:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:50,630 - agent.ComputerAgent - INFO - Computer: double_click({'x': 194, 'y': 696})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 194, 'y': 696})\n",
+ "\u001b[92m16:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:51,282 - agent.ComputerAgent - INFO - Computer: click({'x': 1009, 'y': 222})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1009, 'y': 222})\n",
+ " 30%|████████████----------------------------| 2212/7340 [75:33<175:08, 29.3 steps/min]\u001b[92m16:41:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:51,914 - agent.ComputerAgent - INFO - Computer: wait({'x': 516, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({'x': 516, 'y': 162})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:52,574 - agent.ComputerAgent - INFO - Computer: click({'x': 218, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 218, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:41:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2215/7340 [75:35<174:52, 29.3 steps/min]2025-08-11 16:41:53,930 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 130})\n",
+ "2025-08-11 16:41:54,577 - agent.ComputerAgent - INFO - Computer: click({'x': 230, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 230, 'y': 35})\n",
+ "2025-08-11 16:41:55,167 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:41:55,168 - agent.ComputerAgent - INFO - Computer: click({'x': 27, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 27, 'y': 10})\n",
+ "\u001b[92m16:41:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:41:56,505 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 663, 'scroll_x': 0, 'x': 336, 'y': 375})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 663, 'scroll_x': 0, 'x': 336, 'y': 375})\n",
+ "\u001b[92m16:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2216/7340 [75:38<174:53, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:41:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:41:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:41:58,166 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 538}, {'x': 989, 'y': 599}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 989, 'y': 538}, {'x': 989, 'y': 599}]})\n",
+ "\u001b[92m16:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2220/7340 [75:39<174:30, 29.3 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:41:58,807 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m16:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:41:59,458 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 129, 'y': 257}, {'x': 316, 'y': 644}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 129, 'y': 257}, {'x': 316, 'y': 644}]})\n",
+ " 30%|████████████----------------------------| 2221/7340 [75:41<174:26, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 30%|████████████----------------------------| 2222/7340 [75:43<174:24, 29.3 steps/min]\u001b[92m16:42:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:42:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:01,963 - agent.ComputerAgent - INFO - Computer: click({'x': 616, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 616, 'y': 483})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:02,627 - agent.ComputerAgent - INFO - Computer: click({'x': 247, 'y': 570})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 247, 'y': 570})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:03,988 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2222/7340 [75:45<174:30, 29.3 steps/min]2025-08-11 16:42:04,647 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:42:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:05,277 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:42:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2225/7340 [75:47<174:14, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:06,606 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:42:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:42:07,268 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/reset \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2225/7340 [75:49<174:17, 29.3 steps/min]2025-08-11 16:42:07,952 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 287})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 287})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2225/7340 [75:51<174:22, 29.3 steps/min]\u001b[92m16:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:09,935 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:10,595 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2226/7340 [75:52<174:18, 29.3 steps/min]2025-08-11 16:42:11,249 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 577})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 577})\n",
+ "\u001b[92m16:42:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:11,880 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 83, 'y': 139})\n",
+ " 30%|████████████----------------------------| 2226/7340 [75:53<174:21, 29.3 steps/min]2025-08-11 16:42:12,556 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:42:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:42:13,246 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:42:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:42:13,927 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:42:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2228/7340 [75:55<174:12, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:14,567 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:15,235 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:42:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2229/7340 [75:56<174:08, 29.3 steps/min]2025-08-11 16:42:15,916 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:42:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e7117b51-399c-45d8-88a1-c54a00b2bc38/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2229/7340 [75:58<174:13, 29.3 steps/min]\u001b[92m16:42:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:17,879 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:42:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:18,577 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:00<174:16, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:01<174:18, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:20,791 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m16:42:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:02<174:21, 29.3 steps/min]2025-08-11 16:42:21,476 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:42:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/reset \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:03<174:24, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:04<174:27, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:05<174:29, 29.3 steps/min]2025-08-11 16:42:24,437 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 30%|████████████----------------------------| 2229/7340 [76:06<174:31, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 30%|████████████----------------------------| 2230/7340 [76:07<174:27, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:42:27,176 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:42:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2230/7340 [76:10<174:32, 29.3 steps/min]\u001b[92m16:42:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2230/7340 [76:13<174:40, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 30%|████████████----------------------------| 2231/7340 [76:14<174:35, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:34,009 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 30%|████████████----------------------------| 2231/7340 [76:15<174:38, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:34,629 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:35,308 - agent.ComputerAgent - INFO - Computer: click({'x': 735, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 735, 'y': 402})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:37,887 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:42:37,888 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "\u001b[92m16:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:38,541 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 760})\n",
+ "\u001b[92m16:42:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:42:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 30%|████████████----------------------------| 2231/7340 [76:23<174:56, 29.2 steps/min]\u001b[92m16:42:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:42,545 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 751})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 751})\n",
+ "2025-08-11 16:42:43,201 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:42:43,202 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 21, 'y': 81})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 21, 'y': 81})\n",
+ "2025-08-11 16:42:43,855 - agent.ComputerAgent - INFO - Computer: double_click({'x': 17, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 17, 'y': 286})\n",
+ "2025-08-11 16:42:44,505 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 573})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 573})\n",
+ "\u001b[92m16:42:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:42:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:45,226 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:42:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:45,879 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 148})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:46,508 - agent.ComputerAgent - INFO - Computer: click({'x': 1009, 'y': 281})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1009, 'y': 281})\n",
+ "\u001b[92m16:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 30%|████████████----------------------------| 2234/7340 [76:28<174:46, 29.2 steps/min]\u001b[92m16:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:47,146 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 399, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:48,457 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 336, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 336, 'y': 153})\n",
+ "2025-08-11 16:42:49,109 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 235})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 235})\n",
+ "2025-08-11 16:42:49,801 - agent.ComputerAgent - INFO - Computer: double_click({'x': 620, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 620, 'y': 483})\n",
+ "\u001b[92m16:42:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 31%|████████████----------------------------| 2240/7340 [76:31<174:14, 29.3 steps/min]2025-08-11 16:42:50,459 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 151, 'y': 52})\n",
+ "\u001b[92m16:42:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:51,112 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:42:51,112 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 140})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 520, 'y': 140})\n",
+ " 31%|████████████----------------------------| 2246/7340 [76:33<173:38, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:52,759 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2246/7340 [76:35<173:41, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:42:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:54,581 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 56})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 316, 'y': 56})\n",
+ " 31%|████████████----------------------------| 2246/7340 [76:36<173:44, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2247/7340 [76:37<173:40, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:42:56,618 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:42:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2247/7340 [76:39<173:44, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:42:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:42:58,589 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:42:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:42:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 31%|████████████----------------------------| 2248/7340 [76:40<173:40, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:42:59,266 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:42:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:42:59,953 - agent.ComputerAgent - INFO - Computer: click({'x': 713, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 713, 'y': 40})\n",
+ "2025-08-11 16:43:00,570 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:43:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:43:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:01,237 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2248/7340 [76:42<173:46, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:01,918 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 10})\n",
+ "2025-08-11 16:43:02,565 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:43:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:43:03,237 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:43:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:03,870 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:43:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2249/7340 [76:45<173:45, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:04,517 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:43:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:05,166 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:43:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2250/7340 [76:46<173:41, 29.3 steps/min]2025-08-11 16:43:05,808 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:43:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:06,508 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:43:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2250/7340 [76:48<173:44, 29.3 steps/min]2025-08-11 16:43:07,154 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:43:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:07,827 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:43:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2250/7340 [76:49<173:47, 29.3 steps/min]2025-08-11 16:43:08,506 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:43:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:09,126 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:43:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2250/7340 [76:50<173:51, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 31%|████████████----------------------------| 2250/7340 [76:51<173:53, 29.3 steps/min]\u001b[92m16:43:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:10,937 - agent.ComputerAgent - INFO - Computer: click({'x': 448, 'y': 223})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 448, 'y': 223})\n",
+ " 31%|████████████----------------------------| 2250/7340 [76:52<173:55, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:43:12,089 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:43:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2251/7340 [76:53<173:51, 29.3 steps/min]2025-08-11 16:43:13,102 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:43:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2251/7340 [76:56<173:56, 29.3 steps/min]\u001b[92m16:43:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:15,993 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 604, 'scroll_x': 0, 'x': 512, 'y': 375})\n",
+ " 31%|████████████----------------------------| 2251/7340 [76:57<173:59, 29.2 steps/min]\u001b[92m16:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:16,685 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 145})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:43:17,342 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:43:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2253/7340 [76:59<173:49, 29.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2254/7340 [77:00<173:45, 29.3 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:43:19,120 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m16:43:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:43:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:20,445 - agent.ComputerAgent - INFO - Computer: click({'x': 991, 'y': 337})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2254/7340 [77:02<173:51, 29.3 steps/min]\u001b[92m16:43:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2255/7340 [77:04<173:47, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:23,114 - agent.ComputerAgent - INFO - Computer: click({'x': 179, 'y': 304})\n",
+ "2025-08-11 16:43:23,758 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 523})\n",
+ "\u001b[92m16:43:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2255/7340 [77:05<173:50, 29.3 steps/min]\n",
+ "2025-08-11 16:43:24,421 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 438})\n",
+ "2025-08-11 16:43:25,073 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 90})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2258/7340 [77:07<173:34, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:26,368 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:43:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:43:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:27,053 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:43:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:27,771 - agent.ComputerAgent - INFO - Computer: click({'x': 584, 'y': 105})\n",
+ " 31%|████████████----------------------------| 2260/7340 [77:09<173:26, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2261/7340 [77:10<173:22, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:43:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2261/7340 [77:11<173:24, 29.3 steps/min]\n",
+ "\u001b[92m16:43:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:43:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:30,835 - agent.ComputerAgent - INFO - Computer: double_click({'x': 620, 'y': 483})\n",
+ "2025-08-11 16:43:31,493 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 390})\n",
+ "\u001b[92m16:43:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2261/7340 [77:13<173:27, 29.3 steps/min]2025-08-11 16:43:32,111 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m16:43:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:32,746 - agent.ComputerAgent - INFO - Computer: click({'x': 1009, 'y': 284})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:43:34,074 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2263/7340 [77:15<173:20, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:35,393 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:43:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:43:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2264/7340 [77:17<173:16, 29.3 steps/min]2025-08-11 16:43:36,070 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 662, 'scroll_x': 0, 'x': 336, 'y': 117})\n",
+ "2025-08-11 16:43:36,698 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:43:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:37,379 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:43:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:43:38,722 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ " 31%|████████████----------------------------| 2265/7340 [77:20<173:17, 29.3 steps/min]2025-08-11 16:43:39,391 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:43:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:40,020 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:40,700 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:42,045 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ " 31%|████████████----------------------------| 2266/7340 [77:23<173:18, 29.3 steps/min]\u001b[92m16:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:43,354 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m16:43:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2266/7340 [77:25<173:21, 29.3 steps/min]\n",
+ "2025-08-11 16:43:44,018 - agent.ComputerAgent - INFO - Computer: click({'x': 230, 'y': 35})\n",
+ "2025-08-11 16:43:44,674 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:43:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:45,322 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:43:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2266/7340 [77:27<173:27, 29.3 steps/min]\u001b[92m16:43:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:47,297 - agent.ComputerAgent - INFO - Computer: type({'text': '\\ncd ~/Desktop\\nfind photos -type f -iname \"*.jpg\" | wc -l\\nls -1 cpjpg | wc -l\\n'})\n",
+ " 31%|████████████----------------------------| 2267/7340 [77:29<173:23, 29.3 steps/min]\u001b[92m16:43:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:47,938 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 41})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2268/7340 [77:30<173:18, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:49,780 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m16:43:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:43:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2269/7340 [77:31<173:15, 29.3 steps/min]\n",
+ "2025-08-11 16:43:50,431 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 305})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:43:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:52,761 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:43:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2269/7340 [77:35<173:23, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:54,070 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:54,709 - agent.ComputerAgent - INFO - Computer: click({'x': 452, 'y': 305})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:43:56,071 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 520, 'y': 437})\n",
+ "\u001b[92m16:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2271/7340 [77:38<173:17, 29.2 steps/min]2025-08-11 16:43:57,347 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:43:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:43:57,989 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 527})\n",
+ "\u001b[92m16:43:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:43:58,630 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 390})\n",
+ " 31%|████████████----------------------------| 2273/7340 [77:40<173:08, 29.3 steps/min]\u001b[92m16:43:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:43:59,300 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:43:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:00,864 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+s'})\n",
+ "2025-08-11 16:44:01,500 - agent.ComputerAgent - INFO - Computer: click({'x': 789, 'y': 403})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 789, 'y': 403})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:03,185 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ " 31%|████████████----------------------------| 2275/7340 [77:44<173:05, 29.3 steps/min]2025-08-11 16:44:03,823 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:04,510 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:44:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2276/7340 [77:46<173:02, 29.3 steps/min]2025-08-11 16:44:05,200 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:44:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:05,893 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:44:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2276/7340 [77:47<173:05, 29.3 steps/min]2025-08-11 16:44:06,551 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:44:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:07,182 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:44:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2276/7340 [77:49<173:08, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:08,527 - agent.ComputerAgent - INFO - Computer: type({'text': 'bash -lc \\'printf \"\\\\n\\\\\" Enable absolute line numbers by default\\\\nset number\\\\n\" >> ~/.vimrc && echo Done\\''})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'bash -lc \\'printf \"\\\\n\\\\\" Enable absolute line numbers by default\\\\nset number\\\\n\" >> ~/.vimrc && echo Done\\''})\n",
+ " 31%|████████████----------------------------| 2276/7340 [77:50<173:11, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:10,219 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:44:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2277/7340 [77:52<173:08, 29.2 steps/min]2025-08-11 16:44:10,891 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:44:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:11,562 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2277/7340 [77:53<173:11, 29.2 steps/min]2025-08-11 16:44:12,212 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:44:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:14,267 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:16,173 - agent.ComputerAgent - INFO - Agent: The heading is now center-aligned and the document has been saved. Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: The heading is now center-aligned and the document has been saved. Task completed.\n",
+ "2025-08-11 16:44:16,851 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 4658\n",
+ " - total_tokens: 4876\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0080\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 4658\n",
+ " - total_tokens: 4876\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0080\n",
+ " 31%|████████████----------------------------| 2279/7340 [77:58<173:09, 29.2 steps/min]2025-08-11 16:44:17,521 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:18,174 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:44:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:44:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:19,530 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:44:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2279/7340 [78:01<173:17, 29.2 steps/min]\u001b[92m16:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:20,843 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 193})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 193})\n",
+ "\u001b[92m16:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:44:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:22,141 - agent.ComputerAgent - INFO - Computer: click({'x': 776, 'y': 643})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 776, 'y': 643})\n",
+ "2025-08-11 16:44:22,791 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ " 31%|████████████----------------------------| 2279/7340 [78:04<173:22, 29.2 steps/min]\u001b[92m16:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:44:23,485 - agent.ComputerAgent - INFO - Computer: click({'x': 977, 'y': 16})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 977, 'y': 16})\n",
+ "2025-08-11 16:44:24,132 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 111, 'y': 162})\n",
+ "\u001b[92m16:44:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2281/7340 [78:05<173:12, 29.2 steps/min]2025-08-11 16:44:24,809 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 248})\n",
+ " 31%|████████████----------------------------| 2283/7340 [78:06<173:01, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:26,462 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:44:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2284/7340 [78:08<172:58, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:28,788 - agent.ComputerAgent - INFO - Computer: type({'text': 'none.png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'none.png'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2284/7340 [78:11<173:06, 29.2 steps/min]\u001b[92m16:44:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:32,685 - agent.ComputerAgent - INFO - Computer: type({'text': 'pkill -f soffice.bin\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'pkill -f soffice.bin\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:33,322 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 632})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 982, 'y': 632})\n",
+ "\u001b[92m16:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:44:34,677 - agent.ComputerAgent - INFO - Agent: All .jpg files from the photos directory (recursively) have been copied into cpjpg on the Desktop. Verified: 4 .jpg files found and 4 copied.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: All .jpg files from the photos directory (recursively) have been copied into cpjpg on the Desktop. Verified: 4 .jpg files found and 4 copied.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:44:35,306 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 494\n",
+ " - prompt_tokens: 8268\n",
+ " - total_tokens: 8762\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0153\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 494\n",
+ " - prompt_tokens: 8268\n",
+ " - total_tokens: 8762\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0153\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 31%|████████████----------------------------| 2287/7340 [78:17<172:59, 29.2 steps/min]2025-08-11 16:44:36,616 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 666, 'scroll_x': 0, 'x': 336, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 666, 'scroll_x': 0, 'x': 336, 'y': 152})\n",
+ "2025-08-11 16:44:37,280 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 437})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 520, 'y': 437})\n",
+ "\u001b[92m16:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:44:37,941 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:38,620 - agent.ComputerAgent - INFO - Computer: click({'x': 514, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 514, 'y': 304})\n",
+ "2025-08-11 16:44:39,286 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -200, 'scroll_x': 0, 'x': 589, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -200, 'scroll_x': 0, 'x': 589, 'y': 128})\n",
+ " 31%|████████████----------------------------| 2289/7340 [78:21<172:53, 29.2 steps/min]\u001b[92m16:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:39,956 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 153})\n",
+ "2025-08-11 16:44:40,591 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:44:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:41,646 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:44:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 31%|████████████----------------------------| 2293/7340 [78:23<172:32, 29.3 steps/min]2025-08-11 16:44:42,285 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:44:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:42,973 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:44:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:44,693 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 31%|████████████----------------------------| 2294/7340 [78:26<172:32, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:45,371 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:44:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2311/7340 [78:27<170:43, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5180ec6f-26a5-4ab4-8ca3-87f128083da1/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 31%|████████████----------------------------| 2311/7340 [78:30<170:49, 29.4 steps/min]2025-08-11 16:44:49,145 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:44:49,827 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:50,495 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:44:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fd628f34-1346-4947-bfa4-cf698adb3472/close \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2321/7340 [78:32<169:49, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 32%|████████████----------------------------| 2322/7340 [78:33<169:45, 29.6 steps/min]2025-08-11 16:44:53,297 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:44:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:53,963 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:44:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:44:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2322/7340 [78:36<169:52, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:44:55,334 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:44:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2322/7340 [78:37<169:55, 29.5 steps/min]2025-08-11 16:44:56,654 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:57,522 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.83s/it]\u001b[92m16:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:44:58,177 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2322/7340 [78:40<170:02, 29.5 steps/min]\u001b[92m16:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:00,039 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:45:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ " 32%|████████████----------------------------| 2322/7340 [78:44<170:10, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:45:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2322/7340 [78:45<170:13, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:45:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:45:04,841 - agent.ComputerAgent - INFO - Computer: click({'x': 279, 'y': 156})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 279, 'y': 156})\n",
+ "\u001b[92m16:45:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:45:05,512 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 20})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 20})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:45:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:45:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2323/7340 [78:47<170:11, 29.5 steps/min]2025-08-11 16:45:06,848 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 165})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 165})\n",
+ "2025-08-11 16:45:07,492 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 437})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 520, 'y': 437})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 32%|████████████----------------------------| 2325/7340 [78:49<170:02, 29.5 steps/min]\u001b[92m16:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:45:09,447 - agent.ComputerAgent - INFO - Computer: click({'x': 377, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 377, 'y': 77})\n",
+ "\u001b[92m16:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 32%|████████████----------------------------| 2327/7340 [78:51<169:53, 29.5 steps/min]\u001b[92m16:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:45:10,760 - agent.ComputerAgent - INFO - Computer: double_click({'x': 621, 'y': 479})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 621, 'y': 479})\n",
+ "\u001b[92m16:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:45:11,435 - agent.ComputerAgent - INFO - Computer: click({'x': 806, 'y': 691})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 806, 'y': 691})\n",
+ "\u001b[92m16:45:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:45:12,755 - agent.ComputerAgent - INFO - Computer: type({'text': 'vim ~/.vimrc'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'vim ~/.vimrc'})\n",
+ " 32%|████████████----------------------------| 2328/7340 [78:54<169:52, 29.5 steps/min]2025-08-11 16:45:13,402 - agent.ComputerAgent - INFO - Computer: click({'x': 456, 'y': 276})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 456, 'y': 276})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:45:15,372 - agent.ComputerAgent - INFO - Computer: type({'text': 'pgrep -a -f soffice\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'pgrep -a -f soffice\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2331/7340 [78:57<169:40, 29.5 steps/min]\u001b[92m16:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:45:17,363 - agent.ComputerAgent - INFO - Computer: click({'x': 710, 'y': 380})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 710, 'y': 380})\n",
+ "\u001b[92m16:45:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:45:18,687 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -600, 'scroll_x': 0, 'x': 636, 'y': 361})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -600, 'scroll_x': 0, 'x': 636, 'y': 361})\n",
+ " 32%|████████████----------------------------| 2333/7340 [79:00<169:33, 29.5 steps/min]\u001b[92m16:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:45:19,354 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 663, 'scroll_x': 0, 'x': 336, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 663, 'scroll_x': 0, 'x': 336, 'y': 116})\n",
+ "\u001b[92m16:45:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:45:19,988 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 192})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 192})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2335/7340 [79:01<169:23, 29.5 steps/min]2025-08-11 16:45:20,634 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:45:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:21,297 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:45:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2337/7340 [79:03<169:13, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:21,977 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:45:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:45:22,624 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:45:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2337/7340 [79:04<169:16, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2337/7340 [79:05<169:18, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2337/7340 [79:06<169:20, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:25,326 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:45:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:25,998 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:45:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:27,372 - agent.ComputerAgent - INFO - Computer: type({'text': '.pptx'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '.pptx'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2337/7340 [79:09<169:26, 29.5 steps/min]2025-08-11 16:45:28,035 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:45:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:45:28,688 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:45:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2338/7340 [79:10<169:23, 29.5 steps/min]2025-08-11 16:45:29,365 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:45:30,049 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:45:30,699 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2338/7340 [79:12<169:27, 29.5 steps/min]2025-08-11 16:45:32,248 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ea9e43cc-3d54-4c89-bb53-a189a3ae9a25/close \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2338/7340 [79:14<169:30, 29.5 steps/min]2025-08-11 16:45:33,568 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:45:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2338/7340 [79:15<169:33, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:36,196 - agent.ComputerAgent - INFO - Computer: type({'text': 'corresponding author'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'corresponding author'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2338/7340 [79:17<169:39, 29.5 steps/min]2025-08-11 16:45:36,852 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:45:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2339/7340 [79:18<169:35, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:39,085 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 32%|████████████----------------------------| 2340/7340 [79:22<169:36, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:42,305 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:45:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2340/7340 [79:24<169:41, 29.5 steps/min]\u001b[92m16:45:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 32%|████████████----------------------------| 2340/7340 [79:25<169:43, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed3b76a0-7eeb-473c-8da1-310545515f47/close \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2340/7340 [79:27<169:46, 29.5 steps/min]2025-08-11 16:45:46,146 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2340/7340 [79:30<169:52, 29.4 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.43s/it]\n",
+ " 32%|████████████----------------------------| 2341/7340 [79:31<169:48, 29.4 steps/min]INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 32%|████████████----------------------------| 2341/7340 [79:32<169:50, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:45:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:45:52,857 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:03<00:09, 3.10s/it]2025-08-11 16:45:53,481 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:45:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2341/7340 [79:35<169:57, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:45:54,126 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:45:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2341/7340 [79:37<170:01, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:06<00:06, 3.11s/it]2025-08-11 16:45:57,094 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:08<00:00, 2.21s/it]\n",
+ " 32%|████████████----------------------------| 2341/7340 [79:39<170:06, 29.4 steps/min]\u001b[92m16:45:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:08<00:02, 2.88s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.44s/it]\n",
+ "2025-08-11 16:45:59,968 - agent.ComputerAgent - INFO - Agent: I’ve force-quit LibreOffice Writer from the command line.\n",
+ "\n",
+ "What I did:\n",
+ "- Opened Terminal and ran: pkill -f soffice.bin\n",
+ "- Verified there are no remaining LibreOffice (soffice) processes.\n",
+ "\n",
+ "Notes for future use:\n",
+ "- Graceful stop: pkill -f soffice.bin\n",
+ "- If it still won’t die: kill -9 $(pgrep -f soffice.bin) or killall -9 soffice.bin\n",
+ "- LibreOffice should offer to recover your document the next time you launch it.\n",
+ "\n",
+ "Your other applications remain unaffected.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I’ve force-quit LibreOffice Writer from the command line.\n",
+ "\n",
+ "What I did:\n",
+ "- Opened Terminal and ran: pkill -f soffice.bin\n",
+ "- Verified there are no remaining LibreOffice (soffice) processes.\n",
+ "\n",
+ "Notes for future use:\n",
+ "- Graceful stop: pkill -f soffice.bin\n",
+ "- If it still won’t die: kill -9 $(pgrep -f soffice.bin) or killall -9 soffice.bin\n",
+ "- LibreOffice should offer to recover your document the next time you launch it.\n",
+ "\n",
+ "Your other applications remain unaffected.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:46:00,866 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 639\n",
+ " - prompt_tokens: 6051\n",
+ " - total_tokens: 6690\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0140\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 639\n",
+ " - prompt_tokens: 6051\n",
+ " - total_tokens: 6690\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0140\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:02,444 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 32%|████████████----------------------------| 2345/7340 [79:44<169:50, 29.4 steps/min]2025-08-11 16:46:03,117 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:46:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.44s/it]29.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:46:06,018 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:46:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2346/7340 [79:47<169:51, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:46:06,696 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:46:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:46:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:46:08,018 - agent.ComputerAgent - INFO - Computer: type({'text': ':q'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ':q'})\n",
+ "2025-08-11 16:46:08,682 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 532})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:46:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:46:09,983 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 32%|████████████----------------------------| 2346/7340 [79:51<170:00, 29.4 steps/min]2025-08-11 16:46:10,633 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 682, 'scroll_x': 0, 'x': 336, 'y': 374})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 682, 'scroll_x': 0, 'x': 336, 'y': 374})\n",
+ "2025-08-11 16:46:11,288 - agent.ComputerAgent - INFO - Computer: click({'x': 827, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 827, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:11,912 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ " 32%|████████████----------------------------| 2348/7340 [79:53<169:51, 29.4 steps/min]\u001b[92m16:46:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:46:12,576 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:46:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2350/7340 [79:54<169:41, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:46:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:46:14,427 - agent.ComputerAgent - INFO - Computer: click({'x': 983, 'y': 435})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 983, 'y': 435})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2366/7340 [79:57<168:05, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/74442f45-62aa-40d1-9499-ea3e8e0a4c18/close \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2366/7340 [79:58<168:07, 29.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2366/7340 [79:59<168:09, 29.6 steps/min]2025-08-11 16:46:18,260 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:46:18,933 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2366/7340 [80:00<168:12, 29.6 steps/min]2025-08-11 16:46:19,604 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:46:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:22,011 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2366/7340 [80:03<168:18, 29.6 steps/min]2025-08-11 16:46:22,660 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:46:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:46:23,346 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:46:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:25,291 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.98s/it]2025-08-11 16:46:25,958 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 459\n",
+ " - prompt_tokens: 6738\n",
+ " - total_tokens: 7197\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0130\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 459\n",
+ " - prompt_tokens: 6738\n",
+ " - total_tokens: 7197\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0130\n",
+ " 32%|████████████----------------------------| 2367/7340 [80:07<168:20, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.74s/it]\u001b[92m16:46:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:28,432 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.66s/it]INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]\n",
+ "2025-08-11 16:46:29,922 - agent.ComputerAgent - INFO - Computer: type({'text': '140'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '140'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 32%|████████████----------------------------| 2367/7340 [80:12<168:30, 29.5 steps/min]\u001b[92m16:46:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:46:31,217 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:46:31,916 - agent.ComputerAgent - INFO - Computer: click({'x': 914, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 914, 'y': 203})\n",
+ "\u001b[92m16:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 32%|████████████----------------------------| 2369/7340 [80:13<168:20, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:46:32,889 - agent.ComputerAgent - INFO - Computer: click({'x': 392, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 392, 'y': 275})\n",
+ "\u001b[92m16:46:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:46:34,224 - agent.ComputerAgent - INFO - Computer: type({'text': '@', 'x': 990, 'y': 17})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '@', 'x': 990, 'y': 17})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2371/7340 [80:15<168:12, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:46:34,854 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:46:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:46:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:46:35,539 - agent.ComputerAgent - INFO - Computer: double_click({'x': 224, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 224, 'y': 183})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ " 32%|████████████----------------------------| 2373/7340 [80:18<168:05, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:46:37,226 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:46:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2373/7340 [80:20<168:09, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:39,397 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:46:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 32%|████████████----------------------------| 2385/7340 [80:21<166:56, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a8ddfa4f-6f9b-4ad8-b763-1881394c9926/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:41,443 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/81b23870-39ed-4649-9729-1d4809f713ec/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2386/7340 [80:23<166:55, 29.7 steps/min]2025-08-11 16:46:42,757 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:43,409 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:46:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2386/7340 [80:25<166:58, 29.7 steps/min]2025-08-11 16:46:44,057 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:46:44,745 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2386/7340 [80:26<167:01, 29.7 steps/min]2025-08-11 16:46:45,392 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:46:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2395/7340 [80:27<166:07, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/71840850-9565-4ed2-8fa2-e4f2ba6ec6a9/close \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2395/7340 [80:28<166:10, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:46:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n",
+ " 33%|█████████████---------------------------| 2395/7340 [80:29<166:12, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:46:49,268 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:46:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2395/7340 [80:31<166:14, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2395/7340 [80:32<166:16, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 33%|█████████████---------------------------| 2396/7340 [80:33<166:12, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2396/7340 [80:34<166:14, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:05<00:05, 2.59s/it]2025-08-11 16:46:52,908 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:46:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.03s/it]29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.27s/it]\n",
+ " 33%|█████████████---------------------------| 2397/7340 [80:39<166:18, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:46:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:10<00:00, 2.58s/it]\n",
+ "2025-08-11 16:46:58,866 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:46:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 33%|█████████████---------------------------| 2397/7340 [80:41<166:23, 29.7 steps/min]\u001b[92m16:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:00,318 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:47:00,320 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 402})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:01,624 - agent.ComputerAgent - INFO - Computer: type({'text': 'bash -lc \\'printf \"line a\\\\nline b\\\\nline c\\\\n\" > ~/vim_test.txt && vim ~/vim_test.txt\\''})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'bash -lc \\'printf \"line a\\\\nline b\\\\nline c\\\\n\" > ~/vim_test.txt && vim ~/vim_test.txt\\''})\n",
+ " 33%|█████████████---------------------------| 2397/7340 [80:43<166:27, 29.7 steps/min]\u001b[92m16:47:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:02,258 - agent.ComputerAgent - INFO - Computer: click({'x': 855, 'y': 476})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 855, 'y': 476})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2399/7340 [80:44<166:18, 29.7 steps/min]\u001b[92m16:47:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:03,544 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 604, 'scroll_x': 0, 'x': 307, 'y': 666})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 604, 'scroll_x': 0, 'x': 307, 'y': 666})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 33%|█████████████---------------------------| 2401/7340 [80:46<166:09, 29.7 steps/min]\u001b[92m16:47:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:47:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:05,490 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:47:05,491 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 465, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 465, 'y': 294})\n",
+ " 33%|█████████████---------------------------| 2402/7340 [80:47<166:05, 29.7 steps/min]\u001b[92m16:47:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:06,659 - agent.ComputerAgent - INFO - Computer: click({'x': 637, 'y': 471})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 637, 'y': 471})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:07,327 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:47:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2403/7340 [80:49<166:02, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:47:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:08,385 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 270})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 111, 'y': 270})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2404/7340 [80:50<165:58, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:09,003 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:47:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:47:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:09,698 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 530, 'scroll_x': 0, 'x': 574, 'y': 736})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 530, 'scroll_x': 0, 'x': 574, 'y': 736})\n",
+ "2025-08-11 16:47:10,366 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:47:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 33%|█████████████---------------------------| 2406/7340 [80:52<165:50, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:11,392 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 164})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2407/7340 [80:53<165:46, 29.8 steps/min]2025-08-11 16:47:12,011 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:12,685 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2408/7340 [80:55<165:44, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:14,028 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:47:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:14,693 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:47:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2408/7340 [80:57<165:48, 29.7 steps/min]\u001b[92m16:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:16,409 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 75})\n",
+ "2025-08-11 16:47:17,070 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:47:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:18,388 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:47:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2409/7340 [81:00<165:48, 29.7 steps/min]2025-08-11 16:47:19,055 - agent.ComputerAgent - INFO - Computer: click({'x': 458, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 458, 'y': 275})\n",
+ "2025-08-11 16:47:19,739 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:47:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2410/7340 [81:01<165:44, 29.7 steps/min]2025-08-11 16:47:20,387 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:47:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:21,068 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:47:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2411/7340 [81:02<165:41, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:22,362 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:47:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:23,717 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 33%|█████████████---------------------------| 2411/7340 [81:05<165:46, 29.7 steps/min]\u001b[92m16:47:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:24,730 - agent.ComputerAgent - INFO - Computer: double_click({'x': 331, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 331, 'y': 111})\n",
+ " 33%|█████████████---------------------------| 2412/7340 [81:06<165:42, 29.7 steps/min]2025-08-11 16:47:25,410 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:47:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2413/7340 [81:07<165:38, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:26,610 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:27,269 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:47:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2413/7340 [81:09<165:43, 29.7 steps/min]\u001b[92m16:47:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:47:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:29,307 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:47:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2414/7340 [81:11<165:39, 29.7 steps/min]\u001b[92m16:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:30,494 - agent.ComputerAgent - INFO - Computer: click({'x': 946, 'y': 738})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 946, 'y': 738})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2414/7340 [81:12<165:42, 29.7 steps/min]2025-08-11 16:47:31,166 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:31,814 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2415/7340 [81:13<165:39, 29.7 steps/min]2025-08-11 16:47:32,487 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 294})\n",
+ "2025-08-11 16:47:33,177 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:47:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2415/7340 [81:14<165:41, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2416/7340 [81:15<165:37, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2416/7340 [81:16<165:39, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2416/7340 [81:17<165:41, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2417/7340 [81:19<165:37, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:38,035 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:38,680 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2417/7340 [81:20<165:40, 29.7 steps/min]2025-08-11 16:47:39,336 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:47:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:47:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:41,078 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:47:41,079 - agent.ComputerAgent - INFO - Computer: click({'x': 92, 'y': 359})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 92, 'y': 359})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2417/7340 [81:23<165:46, 29.7 steps/min]\u001b[92m16:47:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 33%|█████████████---------------------------| 2418/7340 [81:24<165:42, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:47:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:47:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:44,087 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 982, 'y': 760})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 33%|█████████████---------------------------| 2419/7340 [81:25<165:39, 29.7 steps/min]2025-08-11 16:47:44,730 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:47:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:47:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:47:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:46,038 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 654, 'scroll_x': 0, 'x': 283, 'y': 664})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 654, 'scroll_x': 0, 'x': 283, 'y': 664})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 33%|█████████████---------------------------| 2420/7340 [81:28<165:38, 29.7 steps/min]\u001b[92m16:47:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:47:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:47,983 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 355})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 585, 'y': 355})\n",
+ "\u001b[92m16:47:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2421/7340 [81:29<165:34, 29.7 steps/min]2025-08-11 16:47:48,672 - agent.ComputerAgent - INFO - Computer: click({'x': 962, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 962, 'y': 234})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:49,341 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2422/7340 [81:31<165:31, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:50,011 - agent.ComputerAgent - INFO - Computer: click({'x': 392, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 392, 'y': 275})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:51,361 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:47:51,362 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 33%|█████████████---------------------------| 2423/7340 [81:33<165:29, 29.7 steps/min]\u001b[92m16:47:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:52,058 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:47:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:52,742 - agent.ComputerAgent - INFO - Computer: click({'x': 196, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 196, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2425/7340 [81:35<165:21, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2426/7340 [81:36<165:17, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:47:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:47:55,312 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 294})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:47:55,965 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:47:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:47:57,620 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2427/7340 [81:39<165:17, 29.7 steps/min]\u001b[92m16:47:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:47:58,279 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:47:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:47:58,928 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 316, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2429/7340 [81:40<165:08, 29.7 steps/min]2025-08-11 16:47:59,585 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:47:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:00,942 - agent.ComputerAgent - INFO - Computer: type({'text': ':q'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ':q'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:48:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:02,299 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:48:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2430/7340 [81:44<165:09, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:48:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2431/7340 [81:45<165:04, 29.7 steps/min]2025-08-11 16:48:03,692 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:48:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:04,331 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:48:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2431/7340 [81:46<165:07, 29.7 steps/min]\u001b[92m16:48:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:04,993 - agent.ComputerAgent - INFO - Computer: click({'x': 458, 'y': 422})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 458, 'y': 422})\n",
+ "2025-08-11 16:48:05,658 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2431/7340 [81:47<165:09, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:07,064 - agent.ComputerAgent - INFO - Computer: click({'x': 474, 'y': 332})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 474, 'y': 332})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2436/7340 [81:48<164:42, 29.8 steps/min]2025-08-11 16:48:07,712 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:48:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:48:08,382 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4054e85-5304-43a3-b6d7-128e302780cb/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 33%|█████████████---------------------------| 2438/7340 [81:50<164:32, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:09,668 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:48:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2438/7340 [81:51<164:35, 29.8 steps/min]2025-08-11 16:48:10,320 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:48:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2438/7340 [81:52<164:37, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/reset \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2438/7340 [81:53<164:39, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:12,541 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:48:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:48:13,223 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:48:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2438/7340 [81:54<164:42, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:14,391 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:48:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2438/7340 [81:56<164:44, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:48:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2440/7340 [81:57<164:35, 29.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3e4ea7d7-21a2-4b07-abd4-a3e280e44e0b/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]164:37, 29.8 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:18,050 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+right'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]29.8 steps/min]2025-08-11 16:48:19,222 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:48:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2440/7340 [82:01<164:44, 29.7 steps/min]\u001b[92m16:48:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:48:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2440/7340 [82:03<164:46, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:02, 2.08s/it]2025-08-11 16:48:23,175 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.60s/it]\n",
+ " 33%|█████████████---------------------------| 2440/7340 [82:05<164:52, 29.7 steps/min]\u001b[92m16:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2441/7340 [82:06<164:48, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.96s/it]29.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2441/7340 [82:11<164:57, 29.7 steps/min]\u001b[92m16:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:30,467 - agent.ComputerAgent - INFO - Computer: click({'x': 809, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 809, 'y': 35})\n",
+ "2025-08-11 16:48:31,115 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2441/7340 [82:13<165:01, 29.7 steps/min]\u001b[92m16:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:32,488 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 757})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 757})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:48:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2442/7340 [82:14<164:58, 29.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:48:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:48:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 33%|█████████████---------------------------| 2443/7340 [82:15<164:54, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:35,995 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'right'})\n",
+ " 33%|█████████████---------------------------| 2443/7340 [82:17<164:57, 29.7 steps/min]\u001b[92m16:48:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:36,630 - agent.ComputerAgent - INFO - Computer: click({'x': 410, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 410, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:48:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2444/7340 [82:19<164:54, 29.7 steps/min]\u001b[92m16:48:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:38,032 - agent.ComputerAgent - INFO - Computer: right_click({'x': 346, 'y': 88})\n",
+ "INFO:agent.ComputerAgent:Computer: right_click({'x': 346, 'y': 88})\n",
+ "2025-08-11 16:48:38,033 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: right_click\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\u001b[92m16:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2445/7340 [82:20<164:51, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:39,316 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:48:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:48:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:40,022 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 92})\n",
+ " 33%|█████████████---------------------------| 2445/7340 [82:21<164:53, 29.7 steps/min]\u001b[92m16:48:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:41,205 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 331})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 331})\n",
+ " 33%|█████████████---------------------------| 2447/7340 [82:23<164:45, 29.7 steps/min]\u001b[92m16:48:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:42,867 - agent.ComputerAgent - INFO - Computer: click({'x': 453, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 453, 'y': 306})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:44,171 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2447/7340 [82:25<164:49, 29.7 steps/min]\u001b[92m16:48:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:44,804 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 471, 'y': 328}, {'x': 452, 'y': 709}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 471, 'y': 328}, {'x': 452, 'y': 709}]})\n",
+ "2025-08-11 16:48:45,491 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:48:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 33%|█████████████---------------------------| 2449/7340 [82:27<164:40, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:48:46,152 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:48:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:48:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:48:46,833 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 362})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 644, 'y': 362})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2450/7340 [82:28<164:36, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:48,011 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:48:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:48:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7e0b0038-3a97-4d93-8c5c-154cc0b95af9/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 33%|█████████████---------------------------| 2454/7340 [82:29<164:15, 29.7 steps/min]2025-08-11 16:48:49,015 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:48:49,017 - agent.ComputerAgent - INFO - Computer: click({'x': 801, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 801, 'y': 739})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 33%|█████████████---------------------------| 2455/7340 [82:31<164:12, 29.7 steps/min]2025-08-11 16:48:50,329 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:48:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2456/7340 [82:32<164:08, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:48:51,487 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:48:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2456/7340 [82:33<164:10, 29.7 steps/min]2025-08-11 16:48:52,161 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:48:52,843 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:54,136 - agent.ComputerAgent - INFO - Agent: I found a clear tutorial on enabling absolute line numbers in Vim:\n",
+ "- Vim’s absolute, relative and hybrid line numbers by Jeff Kreeftmeijer: https://jeffkreeftmeijer.com/vim-number/\n",
+ "\n",
+ "What I configured on your machine:\n",
+ "- Added this to your ~/.vimrc so absolute line numbers are enabled by default:\n",
+ " - Enable absolute line numbers by default\n",
+ " - set number\n",
+ "- Verified by opening a test file in Vim; absolute line numbers appeared in the left gutter.\n",
+ "\n",
+ "Notes:\n",
+ "- Temporarily toggle in a Vim session if needed:\n",
+ " - Turn off: :set nonumber\n",
+ " - Turn on: :set number\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I found a clear tutorial on enabling absolute line numbers in Vim:\n",
+ "- Vim’s absolute, relative and hybrid line numbers by Jeff Kreeftmeijer: https://jeffkreeftmeijer.com/vim-number/\n",
+ "\n",
+ "What I configured on your machine:\n",
+ "- Added this to your ~/.vimrc so absolute line numbers are enabled by default:\n",
+ " - Enable absolute line numbers by default\n",
+ " - set number\n",
+ "- Verified by opening a test file in Vim; absolute line numbers appeared in the left gutter.\n",
+ "\n",
+ "Notes:\n",
+ "- Temporarily toggle in a Vim session if needed:\n",
+ " - Turn off: :set nonumber\n",
+ " - Turn on: :set number\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:48:54,765 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 593\n",
+ " - prompt_tokens: 13533\n",
+ " - total_tokens: 14126\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 12160\n",
+ " - response_cost: $0.0092\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 593\n",
+ " - prompt_tokens: 13533\n",
+ " - total_tokens: 14126\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 12160\n",
+ " - response_cost: $0.0092\n",
+ " 33%|█████████████---------------------------| 2457/7340 [82:36<164:10, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:56,086 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ " 33%|█████████████---------------------------| 2457/7340 [82:37<164:13, 29.7 steps/min]2025-08-11 16:48:56,739 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:48:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:48:57,398 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:48:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:48:58,698 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 33%|█████████████---------------------------| 2458/7340 [82:40<164:12, 29.7 steps/min]2025-08-11 16:48:59,351 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:48:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2459/7340 [82:41<164:08, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 34%|█████████████---------------------------| 2460/7340 [82:42<164:04, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2460/7340 [82:43<164:06, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/475030e1-7ae3-4ced-91fb-2221d956a2ad/close \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2462/7340 [82:44<163:56, 29.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2462/7340 [82:45<163:58, 29.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2c254802-788e-4b4b-98dc-68cd2c6bcce4/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:06,109 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+right'})\n",
+ " 34%|█████████████---------------------------| 2462/7340 [82:47<164:02, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:07,406 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "2025-08-11 16:49:08,049 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ " 34%|█████████████---------------------------| 2462/7340 [82:49<164:06, 29.7 steps/min]\u001b[92m16:49:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:49:08,722 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:49:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:09,373 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:49:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2463/7340 [82:52<164:05, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]164:07, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]29.7 steps/min]2025-08-11 16:49:13,513 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:49:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2b43eb21-4025-495a-8c66-358bfcac034b/reset \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2463/7340 [82:55<164:12, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f73836c4-d8e3-425b-a750-f2319c89164e/reset \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2463/7340 [82:56<164:14, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:16,106 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:49:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 16:49:17,312 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:49:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2463/7340 [82:59<164:19, 29.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2463/7340 [83:00<164:21, 29.7 steps/min]\u001b[92m16:49:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:49:19,008 - agent.ComputerAgent - INFO - Computer: click({'x': 888, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 888, 'y': 168})\n",
+ "\u001b[92m16:49:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:19,667 - agent.ComputerAgent - INFO - Computer: click({'x': 304, 'y': 448})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 304, 'y': 448})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2464/7340 [83:02<164:19, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:49:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:49:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:22,248 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 277})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 633, 'y': 277})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2465/7340 [83:04<164:18, 29.7 steps/min]\u001b[92m16:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:49:23,554 - agent.ComputerAgent - INFO - Computer: double_click({'x': 46, 'y': 93})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 46, 'y': 93})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:49:26,243 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:49:26,244 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+e'})\n",
+ " 34%|█████████████---------------------------| 2466/7340 [83:07<164:18, 29.7 steps/min]2025-08-11 16:49:26,930 - agent.ComputerAgent - INFO - Computer: double_click({'x': 627, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 627, 'y': 483})\n",
+ "2025-08-11 16:49:27,562 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:49:27,562 - agent.ComputerAgent - INFO - Computer: move({'x': 19, 'y': 391})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 19, 'y': 391})\n",
+ "\u001b[92m16:49:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:49:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:28,211 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ " 34%|█████████████---------------------------| 2467/7340 [83:09<164:16, 29.7 steps/min]\u001b[92m16:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:49:29,385 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 304, 'y': 661}, {'x': 574, 'y': 737}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 304, 'y': 661}, {'x': 574, 'y': 737}]})\n",
+ " 34%|█████████████---------------------------| 2469/7340 [83:11<164:06, 29.7 steps/min]\u001b[92m16:49:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:30,058 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 471, 'y': 328}, {'x': 457, 'y': 712}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 471, 'y': 328}, {'x': 457, 'y': 712}]})\n",
+ " 34%|█████████████---------------------------| 2471/7340 [83:14<164:00, 29.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:33,822 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:49:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2471/7340 [83:15<164:03, 29.7 steps/min]2025-08-11 16:49:34,508 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:49:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:49:35,189 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:49:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2471/7340 [83:16<164:06, 29.7 steps/min]2025-08-11 16:49:35,838 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:49:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:37,544 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ " 34%|█████████████---------------------------| 2471/7340 [83:19<164:10, 29.7 steps/min]2025-08-11 16:49:38,553 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:49:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2472/7340 [83:20<164:06, 29.7 steps/min]2025-08-11 16:49:39,232 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:49:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:39,912 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:49:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2472/7340 [83:21<164:09, 29.7 steps/min]2025-08-11 16:49:40,562 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:49:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2472/7340 [83:22<164:11, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:49:41,755 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:49:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2472/7340 [83:24<164:14, 29.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:49:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:43,568 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 101})\n",
+ " 34%|█████████████---------------------------| 2473/7340 [83:30<164:20, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:50,013 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:49:51,362 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ " 34%|█████████████---------------------------| 2473/7340 [83:33<164:26, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/reset \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:49:52,015 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:49:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:49:52,658 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:49:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2474/7340 [83:34<164:22, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2474/7340 [83:35<164:24, 29.6 steps/min]\u001b[92m16:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:54,532 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 628})\n",
+ " 34%|█████████████---------------------------| 2474/7340 [83:36<164:26, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2475/7340 [83:37<164:22, 29.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:49:56,939 - agent.ComputerAgent - INFO - Computer: click({'x': 808, 'y': 505})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2475/7340 [83:38<164:24, 29.6 steps/min]2025-08-11 16:49:57,564 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:49:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:49:58,217 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:49:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2476/7340 [83:39<164:21, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:49:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2476/7340 [83:41<164:23, 29.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:00,733 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 725})\n",
+ " 34%|█████████████---------------------------| 2476/7340 [83:42<164:26, 29.6 steps/min]\u001b[92m16:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:01,761 - agent.ComputerAgent - INFO - Computer: click({'x': 331, 'y': 112})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:03,095 - agent.ComputerAgent - INFO - Computer: type({'text': 'WEEKDAY(B3;2)>5'})\n",
+ " 34%|█████████████---------------------------| 2477/7340 [83:44<164:25, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:50:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2479/7340 [83:46<164:15, 29.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:05,091 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 294})\n",
+ "\u001b[92m16:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:05,754 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2479/7340 [83:48<164:19, 29.6 steps/min]\u001b[92m16:50:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:50:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:07,118 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 471, 'y': 328}, {'x': 351, 'y': 709}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:08,416 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 188})\n",
+ "2025-08-11 16:50:09,045 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:50:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2480/7340 [83:50<164:18, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:09,730 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:50:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:50:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:10,446 - agent.ComputerAgent - INFO - Computer: double_click({'x': 618, 'y': 483})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2482/7340 [83:52<164:09, 29.6 steps/min]2025-08-11 16:50:11,451 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:50:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2483/7340 [83:53<164:05, 29.6 steps/min]2025-08-11 16:50:12,123 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:50:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:13,563 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2483/7340 [83:55<164:09, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/514e0362-c0b3-4216-989f-d260ec405efb/reset \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2483/7340 [83:56<164:11, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:15,740 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:50:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2483/7340 [83:57<164:13, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:17,970 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:50:17,970 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2483/7340 [84:00<164:19, 29.6 steps/min]\u001b[92m16:50:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:19,965 - agent.ComputerAgent - INFO - Computer: type({'text': 'Orchis theme gnome-look'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:21,285 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:50:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:50:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2483/7340 [84:03<164:25, 29.5 steps/min]\u001b[92m16:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:22,607 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:23,283 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 626, 'scroll_x': 0, 'x': 588, 'y': 446})\n",
+ "\u001b[92m16:50:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:23,909 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:50:23,909 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 713})\n",
+ " 34%|█████████████---------------------------| 2484/7340 [84:05<164:23, 29.5 steps/min]2025-08-11 16:50:24,578 - agent.ComputerAgent - INFO - Computer: click({'x': 412, 'y': 128})\n",
+ "2025-08-11 16:50:25,256 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:50:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2486/7340 [84:07<164:14, 29.6 steps/min]2025-08-11 16:50:25,924 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:50:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:26,594 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:50:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2487/7340 [84:09<164:12, 29.6 steps/min]\u001b[92m16:50:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:50:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:28,422 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:29,722 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ " 34%|█████████████---------------------------| 2487/7340 [84:11<164:17, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:30,365 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:50:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2489/7340 [84:13<164:09, 29.6 steps/min]\u001b[92m16:50:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:50:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:33,742 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "\u001b[92m16:50:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2489/7340 [84:16<164:14, 29.5 steps/min]\u001b[92m16:50:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:35,048 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:50:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:35,696 - agent.ComputerAgent - INFO - Computer: move({'x': 887, 'y': 167})\n",
+ "2025-08-11 16:50:36,379 - agent.ComputerAgent - INFO - Computer: click({'x': 260, 'y': 101})\n",
+ "\u001b[92m16:50:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:37,041 - agent.ComputerAgent - INFO - Computer: click({'x': 537, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:38,373 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'right'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2490/7340 [84:20<164:16, 29.5 steps/min]\u001b[92m16:50:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:39,696 - agent.ComputerAgent - INFO - Computer: type({'text': 'Mumbai'})\n",
+ "2025-08-11 16:50:40,392 - agent.ComputerAgent - INFO - Computer: click({'x': 746, 'y': 651})\n",
+ " 34%|█████████████---------------------------| 2494/7340 [84:22<163:56, 29.6 steps/min]2025-08-11 16:50:41,039 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:50:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "2025-08-11 16:50:41,701 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:50:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:42,355 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:50:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:24<163:47, 29.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:43,043 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:50:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:25<163:49, 29.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:45,382 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:27<163:53, 29.6 steps/min]2025-08-11 16:50:46,040 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:50:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:46,708 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:50:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:47,380 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:50:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:29<163:57, 29.5 steps/min]2025-08-11 16:50:48,016 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:50:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:50:49,339 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:50:49,340 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:31<164:01, 29.5 steps/min]2025-08-11 16:50:49,998 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:50:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:51,356 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:50:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:34<164:08, 29.5 steps/min]\u001b[92m16:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:50:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:54,063 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 398, 'y': 89})\n",
+ "\u001b[92m16:50:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:50:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:54,739 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ " 34%|█████████████---------------------------| 2496/7340 [84:36<164:11, 29.5 steps/min]2025-08-11 16:50:55,391 - agent.ComputerAgent - INFO - Computer: click({'x': 793, 'y': 41})\n",
+ "\u001b[92m16:50:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:56,056 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m16:50:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:50:56,694 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2498/7340 [84:39<164:06, 29.5 steps/min]\u001b[92m16:50:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:50:58,732 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:50:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:50:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:50:59,814 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:50:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:02,000 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'right'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2500/7340 [84:43<164:02, 29.5 steps/min]2025-08-11 16:51:02,679 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:51:02,680 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 185})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:04,082 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2502/7340 [84:46<163:55, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:05,471 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:06,153 - agent.ComputerAgent - INFO - Computer: double_click({'x': 615, 'y': 483})\n",
+ "\u001b[92m16:51:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2503/7340 [84:47<163:52, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:51:06,824 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 483, 'y': 328}, {'x': 411, 'y': 711}]})\n",
+ " 34%|█████████████---------------------------| 2504/7340 [84:48<163:48, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/029e3386-8f59-44a0-9dba-b14a4d9825ef/close \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:49<163:44, 29.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:50<163:46, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:51<163:48, 29.5 steps/min]2025-08-11 16:51:11,251 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:51:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:53<163:50, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:11,946 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m16:51:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:51:12,584 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:51:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:54<163:52, 29.5 steps/min]2025-08-11 16:51:13,334 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:51:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:51:13,968 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:51:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:55<163:55, 29.5 steps/min]2025-08-11 16:51:14,650 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m16:51:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:51:15,336 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:51:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:57<163:58, 29.5 steps/min]2025-08-11 16:51:15,998 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m16:51:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:51:16,677 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:51:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:58<164:00, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2505/7340 [84:59<164:02, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/reset \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2505/7340 [85:00<164:04, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:20,733 - agent.ComputerAgent - INFO - Computer: type({'text': 'Stockholm'})\n",
+ " 34%|█████████████---------------------------| 2505/7340 [85:02<164:08, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:51:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2506/7340 [85:04<164:06, 29.5 steps/min]\u001b[92m16:51:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\u001b[92m16:51:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:25,182 - agent.ComputerAgent - INFO - Computer: type({'text': 'Orchis theme site:gnome-look.org'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:agent.ComputerAgent:Computer: type({'text': 'Orchis theme site:gnome-look.org'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 16:51:27,403 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ " 34%|█████████████---------------------------| 2507/7340 [85:09<164:09, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:29,765 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m16:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ff8b808f-c3a6-4979-8f9a-c6a25905116c/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2508/7340 [85:11<164:08, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:30,795 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 326})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2508/7340 [85:13<164:12, 29.4 steps/min]\u001b[92m16:51:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:32,772 - agent.ComputerAgent - INFO - Computer: click({'x': 264, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 264, 'y': 101})\n",
+ "2025-08-11 16:51:33,473 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 48, 'y': 60})\n",
+ "2025-08-11 16:51:34,102 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 133})\n",
+ "\u001b[92m16:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:51:34,763 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 41})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 41})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2509/7340 [85:16<164:11, 29.4 steps/min]\u001b[92m16:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:51:35,413 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 257})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 66, 'y': 257})\n",
+ "2025-08-11 16:51:36,301 - agent.ComputerAgent - INFO - Computer: click({'x': 367, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 367, 'y': 294})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]\u001b[92m16:51:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]\u001b[92m16:51:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2513/7340 [85:19<163:54, 29.5 steps/min]2025-08-11 16:51:39,053 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2515/7340 [85:20<163:44, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]\u001b[92m16:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 16:51:40,328 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:51:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2515/7340 [85:22<163:46, 29.5 steps/min]2025-08-11 16:51:41,372 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:51:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2515/7340 [85:23<163:48, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2515/7340 [85:24<163:50, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:43,559 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2515/7340 [85:25<163:52, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:44,609 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 96, 'y': 185}, {'x': 91, 'y': 322}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 96, 'y': 185}, {'x': 91, 'y': 322}]})\n",
+ "\u001b[92m16:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2515/7340 [85:26<163:54, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:45,307 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 428})\n",
+ "2025-08-11 16:51:45,960 - agent.ComputerAgent - INFO - Computer: double_click({'x': 611, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 611, 'y': 483})\n",
+ "\u001b[92m16:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2516/7340 [85:27<163:51, 29.4 steps/min]2025-08-11 16:51:46,615 - agent.ComputerAgent - INFO - Computer: click({'x': 274, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 274, 'y': 321})\n",
+ "2025-08-11 16:51:47,259 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 526, 'y': 326}, {'x': 415, 'y': 737}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 526, 'y': 326}, {'x': 415, 'y': 737}]})\n",
+ "2025-08-11 16:51:47,877 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:51:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:51:48,508 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:51:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2518/7340 [85:30<163:44, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:51:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:50,895 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 34%|█████████████---------------------------| 2520/7340 [85:32<163:37, 29.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:51:51,549 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:51:52,200 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:51:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:51:52,837 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:51:52,838 - agent.ComputerAgent - INFO - Computer: click({'x': 258, 'y': 204})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 258, 'y': 204})\n",
+ "2025-08-11 16:51:53,465 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2521/7340 [85:35<163:36, 29.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2522/7340 [85:37<163:34, 29.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:56,658 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:51:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2522/7340 [85:38<163:36, 29.4 steps/min]2025-08-11 16:51:57,339 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2522/7340 [85:39<163:38, 29.4 steps/min]2025-08-11 16:51:58,507 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:51:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:51:59,554 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:51:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2522/7340 [85:41<163:41, 29.4 steps/min]2025-08-11 16:52:00,229 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:01,573 - agent.ComputerAgent - INFO - Computer: type({'text': 'A1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'A1'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2522/7340 [85:44<163:47, 29.4 steps/min]\u001b[92m16:52:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:03,248 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:03,909 - agent.ComputerAgent - INFO - Computer: click({'x': 793, 'y': 41})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 793, 'y': 41})\n",
+ " 34%|█████████████---------------------------| 2523/7340 [85:45<163:44, 29.4 steps/min]2025-08-11 16:52:04,549 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:52:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2524/7340 [85:48<163:44, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 34%|█████████████---------------------------| 2524/7340 [85:49<163:45, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:08,875 - agent.ComputerAgent - INFO - Computer: click({'x': 821, 'y': 423})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 821, 'y': 423})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2524/7340 [85:50<163:47, 29.4 steps/min]2025-08-11 16:52:09,566 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:52:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2525/7340 [85:51<163:43, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:11,251 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:52:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2525/7340 [85:53<163:46, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 34%|█████████████---------------------------| 2525/7340 [85:54<163:48, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:14,093 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6195bb79-4eff-4d3b-8b67-f28a4e6a73fa/close \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2525/7340 [85:55<163:51, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:16,049 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:52:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2526/7340 [85:58<163:50, 29.4 steps/min]\u001b[92m16:52:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:18,940 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'down'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:20,518 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 34%|█████████████---------------------------| 2526/7340 [86:02<163:58, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]\u001b[92m16:52:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2528/7340 [86:03<163:49, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "\u001b[92m16:52:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:23,432 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:52:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 34%|█████████████---------------------------| 2528/7340 [86:05<163:51, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:52:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:52:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 34%|█████████████---------------------------| 2528/7340 [86:06<163:54, 29.4 steps/min]\u001b[92m16:52:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:25,470 - agent.ComputerAgent - INFO - Computer: click({'x': 930, 'y': 167})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 930, 'y': 167})\n",
+ "\u001b[92m16:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:26,112 - agent.ComputerAgent - INFO - Computer: click({'x': 527, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 527, 'y': 278})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:26,778 - agent.ComputerAgent - INFO - Computer: double_click({'x': 357, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 357, 'y': 294})\n",
+ " 34%|█████████████---------------------------| 2528/7340 [86:08<163:58, 29.3 steps/min]\u001b[92m16:52:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:52:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:27,492 - agent.ComputerAgent - INFO - Computer: click({'x': 755, 'y': 88})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 755, 'y': 88})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:28,169 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 635, 'x': 287, 'y': 331})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 635, 'x': 287, 'y': 331})\n",
+ "\u001b[92m16:52:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:52:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:28,796 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 110, 'y': 162})\n",
+ "2025-08-11 16:52:29,550 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 542, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:30,906 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 34%|█████████████---------------------------| 2532/7340 [86:12<163:42, 29.4 steps/min]2025-08-11 16:52:31,555 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 156})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 156})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:32,819 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:52:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:33,491 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ " 35%|█████████████---------------------------| 2535/7340 [86:15<163:29, 29.4 steps/min]\u001b[92m16:52:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:52:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:34,183 - agent.ComputerAgent - INFO - Computer: double_click({'x': 296, 'y': 259})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 296, 'y': 259})\n",
+ "2025-08-11 16:52:34,822 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:52:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:52:35,519 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2537/7340 [86:18<163:23, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:52:37,689 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m16:52:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2537/7340 [86:19<163:25, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2537/7340 [86:20<163:27, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:52:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:39,559 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:52:39,561 - agent.ComputerAgent - INFO - Computer: move({'x': 856, 'y': 412})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 856, 'y': 412})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2537/7340 [86:22<163:31, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:41,269 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:52:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:52:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:41,939 - agent.ComputerAgent - INFO - Computer: click({'x': 73, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 73, 'y': 202})\n",
+ "2025-08-11 16:52:42,582 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:52:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2539/7340 [86:24<163:23, 29.4 steps/min]2025-08-11 16:52:43,230 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:52:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:52:43,906 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:52:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2540/7340 [86:25<163:19, 29.4 steps/min]2025-08-11 16:52:44,578 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:52:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:52:45,645 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:52:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2540/7340 [86:27<163:23, 29.4 steps/min]2025-08-11 16:52:46,310 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m16:52:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:52:46,949 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:52:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2540/7340 [86:28<163:25, 29.4 steps/min]2025-08-11 16:52:47,615 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2540/7340 [86:29<163:27, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 35%|█████████████---------------------------| 2540/7340 [86:30<163:29, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:50,370 - agent.ComputerAgent - INFO - Computer: click({'x': 725, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 725, 'y': 185})\n",
+ " 35%|█████████████---------------------------| 2540/7340 [86:32<163:31, 29.4 steps/min]2025-08-11 16:52:51,058 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:52:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:52:51,761 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:52:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2541/7340 [86:33<163:28, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2542/7340 [86:34<163:24, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:52:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:52:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:52:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:52:54,109 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 454, 'y': 325}, {'x': 512, 'y': 737}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 454, 'y': 325}, {'x': 512, 'y': 737}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2542/7340 [86:35<163:27, 29.4 steps/min]2025-08-11 16:52:54,727 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m16:52:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:52:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 35%|█████████████---------------------------| 2543/7340 [86:37<163:24, 29.4 steps/min]\u001b[92m16:52:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:52:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:52:57,096 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 136})\n",
+ "\u001b[92m16:52:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:52:58,422 - agent.ComputerAgent - INFO - Computer: type({'text': 'Microsoft JhengHei'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Microsoft JhengHei'})\n",
+ " 35%|█████████████---------------------------| 2543/7340 [86:40<163:29, 29.3 steps/min]2025-08-11 16:52:59,090 - agent.ComputerAgent - INFO - Computer: click({'x': 914, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 914, 'y': 232})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:52:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:53:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|█████████████---------------------------| 2546/7340 [86:42<163:15, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:01,053 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:53:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:01,726 - agent.ComputerAgent - INFO - Computer: click({'x': 352, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 352, 'y': 294})\n",
+ " 35%|█████████████---------------------------| 2547/7340 [86:43<163:11, 29.4 steps/min]\u001b[92m16:53:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:02,408 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 402})\n",
+ " 35%|█████████████---------------------------| 2548/7340 [86:44<163:07, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:04,070 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m16:53:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2549/7340 [86:45<163:04, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2549/7340 [86:46<163:06, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:06,425 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 35%|█████████████---------------------------| 2549/7340 [86:48<163:09, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/797f1798-0199-4d66-a503-1c5a8d488911/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:07,620 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:53:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2550/7340 [86:49<163:05, 29.4 steps/min]2025-08-11 16:53:08,303 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:53:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:08,940 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:53:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:53:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2551/7340 [86:51<163:03, 29.4 steps/min]2025-08-11 16:53:10,264 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2551/7340 [86:52<163:05, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:11,590 - agent.ComputerAgent - INFO - Computer: click({'x': 402, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 402, 'y': 143})\n",
+ "2025-08-11 16:53:12,250 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:12,914 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2551/7340 [86:55<163:10, 29.3 steps/min]\u001b[92m16:53:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:14,231 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:15,272 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 643})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 643})\n",
+ " 35%|█████████████---------------------------| 2552/7340 [86:57<163:08, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:53:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:16,457 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 264})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 264})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:18,827 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'q'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'q'})\n",
+ " 35%|█████████████---------------------------| 2553/7340 [87:00<163:08, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:20,130 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:20,800 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2555/7340 [87:03<163:02, 29.3 steps/min]2025-08-11 16:53:22,138 - agent.ComputerAgent - INFO - Computer: click({'x': 464, 'y': 316})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 464, 'y': 316})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:22,763 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:53:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2556/7340 [87:04<162:58, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:23,449 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:53:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:24,087 - agent.ComputerAgent - INFO - Computer: click({'x': 474, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 474, 'y': 202})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:25,441 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:53:25,441 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 35%|█████████████---------------------------| 2557/7340 [87:07<162:57, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:26,745 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+right'})\n",
+ " 35%|█████████████---------------------------| 2560/7340 [87:08<162:42, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:27,909 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:53:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2560/7340 [87:11<162:47, 29.4 steps/min]\u001b[92m16:53:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:30,303 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:53:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2560/7340 [87:12<162:49, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:30,970 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:31,628 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 95, 'y': 189})\n",
+ "\u001b[92m16:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:32,277 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:53:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2560/7340 [87:14<162:52, 29.3 steps/min]2025-08-11 16:53:33,283 - agent.ComputerAgent - INFO - Computer: click({'x': 234, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 234, 'y': 149})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 35%|█████████████---------------------------| 2561/7340 [87:15<162:50, 29.3 steps/min]\u001b[92m16:53:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:34,980 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:53:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:35,634 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:53:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2562/7340 [87:17<162:47, 29.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:36,677 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:53:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:53:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:53:38,002 - agent.ComputerAgent - INFO - Computer: click({'x': 54, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 54, 'y': 164})\n",
+ " 35%|█████████████---------------------------| 2563/7340 [87:19<162:45, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:39,339 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 16:53:39,980 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:53:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:53:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2564/7340 [87:22<162:45, 29.3 steps/min]2025-08-11 16:53:41,351 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:53:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:42,011 - agent.ComputerAgent - INFO - Computer: click({'x': 796, 'y': 41})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 796, 'y': 41})\n",
+ " 35%|█████████████---------------------------| 2564/7340 [87:23<162:47, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/88706cb5-896e-4bf5-8b52-5df252945e00/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:43,178 - agent.ComputerAgent - INFO - Computer: click({'x': 769, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 769, 'y': 532})\n",
+ " 35%|█████████████---------------------------| 2565/7340 [87:24<162:43, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:43,817 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:53:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|█████████████---------------------------| 2566/7340 [87:26<162:41, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:53:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:46,678 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 295})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 295})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|█████████████---------------------------| 2566/7340 [87:29<162:45, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:47,985 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:53:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:48,683 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:53:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2572/7340 [87:30<162:13, 29.4 steps/min]2025-08-11 16:53:49,325 - agent.ComputerAgent - INFO - Computer: click({'x': 900, 'y': 129})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 900, 'y': 129})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:50,642 - agent.ComputerAgent - INFO - Computer: type({'text': 'Microsoft JhengHei'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Microsoft JhengHei'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:53:51,311 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:53:52,665 - agent.ComputerAgent - INFO - Computer: type({'text': 'WeekendRed'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'WeekendRed'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:53:53,974 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+down'})\n",
+ " 35%|██████████████--------------------------| 2573/7340 [87:35<162:17, 29.4 steps/min]2025-08-11 16:53:54,644 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:53:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:53:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:53:57,457 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ " 35%|██████████████--------------------------| 2576/7340 [87:39<162:06, 29.4 steps/min]\u001b[92m16:53:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:53:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:53:58,474 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 659})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 659})\n",
+ "\u001b[92m16:53:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:53:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:53:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:54:00,496 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:54:00,498 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 17, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 17, 'y': 334})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a39ee9df-d3ba-456a-95cf-3a11a826583b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 35%|██████████████--------------------------| 2576/7340 [87:42<162:13, 29.4 steps/min]\u001b[92m16:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:54:01,839 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:54:03,488 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:54:03,489 - agent.ComputerAgent - INFO - Computer: click({'x': 12, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 12, 'y': 524})\n",
+ "\u001b[92m16:54:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:54:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2578/7340 [87:45<162:05, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:04,142 - agent.ComputerAgent - INFO - Computer: double_click({'x': 196, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 196, 'y': 105})\n",
+ "2025-08-11 16:54:04,815 - agent.ComputerAgent - INFO - Computer: click({'x': 962, 'y': 672})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 962, 'y': 672})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:54:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:54:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2579/7340 [87:47<162:04, 29.4 steps/min]2025-08-11 16:54:06,765 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:54:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:54:07,438 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:54:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2581/7340 [87:49<161:55, 29.4 steps/min]2025-08-11 16:54:08,104 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:54:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:54:08,785 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:54:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2581/7340 [87:50<161:58, 29.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:09,986 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:54:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2581/7340 [87:51<162:00, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2581/7340 [87:52<162:02, 29.4 steps/min]2025-08-11 16:54:12,033 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:54:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2581/7340 [87:54<162:05, 29.4 steps/min]\u001b[92m16:54:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:54:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2581/7340 [87:55<162:07, 29.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]\u001b[92m16:54:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:15,208 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:54:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|██████████████--------------------------| 2582/7340 [87:57<162:04, 29.4 steps/min]2025-08-11 16:54:15,852 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:54:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 16:54:16,511 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:54:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2582/7340 [87:58<162:06, 29.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:17,166 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:54:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ "2025-08-11 16:54:18,334 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:54:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2582/7340 [88:00<162:09, 29.3 steps/min]2025-08-11 16:54:19,008 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:54:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2582/7340 [88:01<162:11, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0803e2c2-9de2-40ff-93da-cb49f156cbba/close \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2582/7340 [88:02<162:13, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:54:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:54:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:54:21,744 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 130})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:54:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]\u001b[92m16:54:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2582/7340 [88:04<162:18, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 16:54:25,011 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'right'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|██████████████--------------------------| 2584/7340 [88:06<162:10, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/reset \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2585/7340 [88:07<162:06, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ "2025-08-11 16:54:26,848 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:54:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:54:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2585/7340 [88:09<162:10, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:29,680 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/projects/binder\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/projects/binder\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:54:31,448 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f2'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'f2'})\n",
+ " 35%|██████████████--------------------------| 2585/7340 [88:13<162:16, 29.3 steps/min]2025-08-11 16:54:32,079 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:54:33,435 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 100})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 316, 'y': 100})\n",
+ " 35%|██████████████--------------------------| 2588/7340 [88:15<162:02, 29.3 steps/min]2025-08-11 16:54:34,077 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:54:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:54:34,737 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:54:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2589/7340 [88:17<162:01, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:36,932 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:54:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2589/7340 [88:19<162:04, 29.3 steps/min]\u001b[92m16:54:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2589/7340 [88:21<162:08, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2589/7340 [88:22<162:10, 29.3 steps/min]2025-08-11 16:54:41,317 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m16:54:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|██████████████--------------------------| 2590/7340 [88:23<162:06, 29.3 steps/min]2025-08-11 16:54:42,515 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:54:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:54:43,166 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:54:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2590/7340 [88:25<162:10, 29.3 steps/min]\u001b[92m16:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:54:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2590/7340 [88:27<162:14, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:47,304 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+right'})\n",
+ " 35%|██████████████--------------------------| 2590/7340 [88:29<162:16, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:47,983 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:54:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:54:48,643 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:54:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2590/7340 [88:30<162:19, 29.3 steps/min]\u001b[92m16:54:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:54:49,333 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 980, 'y': 60})\n",
+ " 35%|██████████████--------------------------| 2591/7340 [88:32<162:17, 29.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 35%|██████████████--------------------------| 2591/7340 [88:33<162:19, 29.3 steps/min]\u001b[92m16:54:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:54:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:54:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2591/7340 [88:34<162:21, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|██████████████--------------------------| 2592/7340 [88:36<162:19, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:55,753 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:54:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2592/7340 [88:37<162:21, 29.2 steps/min]2025-08-11 16:54:56,430 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:54:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2592/7340 [88:39<162:24, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:54:59,289 - agent.ComputerAgent - INFO - Computer: type({'text': 'export.jpg'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'export.jpg'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:00,620 - agent.ComputerAgent - INFO - Computer: type({'text': 'git status\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'git status\\n'})\n",
+ " 35%|██████████████--------------------------| 2595/7340 [88:43<162:13, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:55:02,953 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:55:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2595/7340 [88:44<162:16, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2595/7340 [88:46<162:20, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|██████████████--------------------------| 2596/7340 [88:47<162:16, 29.2 steps/min]2025-08-11 16:55:07,213 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:55:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2596/7340 [88:48<162:18, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:07,900 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:55:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:09,649 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'right'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2596/7340 [88:52<162:23, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:55:11,047 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:55:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2597/7340 [88:55<162:23, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:55:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:55:13,741 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 291})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 268, 'y': 291})\n",
+ " 35%|██████████████--------------------------| 2599/7340 [88:56<162:13, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:15,389 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:55:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2599/7340 [88:57<162:15, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:16,582 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:55:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2599/7340 [88:58<162:18, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:55:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:55:17,782 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 165})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 165})\n",
+ " 35%|██████████████--------------------------| 2600/7340 [89:00<162:16, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2600/7340 [89:01<162:17, 29.2 steps/min]2025-08-11 16:55:20,970 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:55:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 35%|██████████████--------------------------| 2601/7340 [89:02<162:14, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2601/7340 [89:03<162:16, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2601/7340 [89:04<162:18, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:24,374 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:55:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:25,705 - agent.ComputerAgent - INFO - Computer: type({'text': 'git add -A && git commit -m \"daily update\" && git push -u origin main\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'git add -A && git commit -m \"daily update\" && git push -u origin main\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2601/7340 [89:07<162:22, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:08<162:19, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:09<162:20, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:10<162:22, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:13<162:28, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:14<162:30, 29.2 steps/min]2025-08-11 16:55:33,699 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:55:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:20<162:41, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:55:41,087 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+right'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+right'})\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:22<162:45, 29.1 steps/min]\u001b[92m16:55:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:55:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:55:42,394 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:55:43,693 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ " 35%|██████████████--------------------------| 2602/7340 [89:25<162:50, 29.1 steps/min]\u001b[92m16:55:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/514e0362-c0b3-4216-989f-d260ec405efb/close \"HTTP/1.1 200 OK\"\n",
+ " 35%|██████████████--------------------------| 2605/7340 [89:27<162:36, 29.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:55:47,023 - agent.ComputerAgent - INFO - Computer: click({'x': 771, 'y': 241})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 771, 'y': 241})\n",
+ " 35%|██████████████--------------------------| 2605/7340 [89:28<162:38, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:55:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2606/7340 [89:29<162:34, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:55:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 16:55:48,908 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 256, 'y': 34})\n",
+ " 36%|██████████████--------------------------| 2606/7340 [89:31<162:36, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2607/7340 [89:32<162:32, 29.1 steps/min]2025-08-11 16:55:51,047 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:55:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2607/7340 [89:34<162:36, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.19s/it]29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ "2025-08-11 16:55:54,756 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:55:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2607/7340 [89:36<162:41, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:55:55,598 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:55:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2607/7340 [89:37<162:42, 29.1 steps/min]\u001b[92m16:55:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:55:56,691 - agent.ComputerAgent - INFO - Computer: click({'x': 796, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 796, 'y': 76})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 36%|██████████████--------------------------| 2607/7340 [89:38<162:44, 29.1 steps/min]\u001b[92m16:55:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:55:57,832 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 249})\n",
+ " 36%|██████████████--------------------------| 2609/7340 [89:43<162:42, 29.1 steps/min]\u001b[92m16:56:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:56:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:03,046 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:56:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2609/7340 [89:44<162:44, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:04,208 - agent.ComputerAgent - INFO - Computer: click({'x': 466, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 466, 'y': 183})\n",
+ " 36%|██████████████--------------------------| 2609/7340 [89:45<162:46, 29.1 steps/min]2025-08-11 16:56:04,846 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:56:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2610/7340 [89:47<162:43, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2610/7340 [89:48<162:44, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2610/7340 [89:50<162:48, 29.1 steps/min]\u001b[92m16:56:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:56:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:09,062 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 271, 'y': 283}, {'x': 209, 'y': 283}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 271, 'y': 283}, {'x': 209, 'y': 283}]})\n",
+ " 36%|██████████████--------------------------| 2610/7340 [89:51<162:50, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:10,216 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:56:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:10,848 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:56:10,849 - agent.ComputerAgent - INFO - Computer: click({'x': 683, 'y': 516})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 683, 'y': 516})\n",
+ " 36%|██████████████--------------------------| 2611/7340 [89:52<162:46, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:12,159 - agent.ComputerAgent - INFO - Agent: All changes in ~/projects/binder were added, committed with the message \"daily update\", and pushed to origin main (upstream set).\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: All changes in ~/projects/binder were added, committed with the message \"daily update\", and pushed to origin main (upstream set).\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 16:56:12,788 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 423\n",
+ " - prompt_tokens: 5347\n",
+ " - total_tokens: 5770\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0109\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 423\n",
+ " - prompt_tokens: 5347\n",
+ " - total_tokens: 5770\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0109\n",
+ " 36%|██████████████--------------------------| 2613/7340 [89:55<162:40, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:15,002 - agent.ComputerAgent - INFO - Computer: wait({'x': 774, 'y': 241})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({'x': 774, 'y': 241})\n",
+ " 36%|██████████████--------------------------| 2614/7340 [89:56<162:37, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2628/7340 [89:57<161:18, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:56:16,685 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m16:56:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8183b94f-6940-4d86-b82b-6ddba6fc8cca/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2628/7340 [89:59<161:20, 29.2 steps/min]2025-08-11 16:56:17,970 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:56:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:18,635 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:56:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 503 Service Unavailable\"\n",
+ " 36%|██████████████--------------------------| 2628/7340 [90:00<161:22, 29.2 steps/min]INFO:openai._base_client:Retrying request to /chat/completions in 0.392967 seconds\n",
+ "\u001b[92m16:56:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:19,787 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 113, 'y': 207}, {'x': 91, 'y': 322}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 113, 'y': 207}, {'x': 91, 'y': 322}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2628/7340 [90:02<161:26, 29.2 steps/min]\u001b[92m16:56:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2629/7340 [90:05<161:25, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.79s/it]\u001b[92m16:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 36%|██████████████--------------------------| 2630/7340 [90:06<161:23, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.75s/it]2025-08-11 16:56:26,375 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:56:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2630/7340 [90:08<161:25, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.47s/it]\n",
+ "2025-08-11 16:56:27,025 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m16:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 36%|██████████████--------------------------| 2630/7340 [90:10<161:28, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:56:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:56:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:30,041 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 657, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 657, 'scroll_x': 0, 'x': 526, 'y': 432})\n",
+ " 36%|██████████████--------------------------| 2630/7340 [90:11<161:31, 29.2 steps/min]\u001b[92m16:56:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:30,689 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 256, 'y': 34})\n",
+ " 36%|██████████████--------------------------| 2631/7340 [90:12<161:27, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:56:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:31,859 - agent.ComputerAgent - INFO - Computer: click({'x': 324, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 324, 'y': 446})\n",
+ " 36%|██████████████--------------------------| 2633/7340 [90:13<161:18, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:33,027 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m16:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2634/7340 [90:14<161:14, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:34,100 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 686, 'y': 40})\n",
+ " 36%|██████████████--------------------------| 2634/7340 [90:15<161:16, 29.2 steps/min]\u001b[92m16:56:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:35,298 - agent.ComputerAgent - INFO - Computer: double_click({'x': 469, 'y': 205})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 469, 'y': 205})\n",
+ " 36%|██████████████--------------------------| 2635/7340 [90:17<161:12, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2637/7340 [90:18<161:02, 29.2 steps/min]\u001b[92m16:56:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:36,999 - agent.ComputerAgent - INFO - Computer: click({'x': 859, 'y': 295})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 859, 'y': 295})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:37,717 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m16:56:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:38,369 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2637/7340 [90:20<161:06, 29.2 steps/min]\u001b[92m16:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:39,671 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 351, 'y': 294}, {'x': 361, 'y': 294}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 351, 'y': 294}, {'x': 361, 'y': 294}]})\n",
+ " 36%|██████████████--------------------------| 2638/7340 [90:21<161:03, 29.2 steps/min]\u001b[92m16:56:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:40,348 - agent.ComputerAgent - INFO - Computer: click({'x': 984, 'y': 576})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 984, 'y': 576})\n",
+ "2025-08-11 16:56:41,006 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:56:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 503 Service Unavailable\"\n",
+ " 36%|██████████████--------------------------| 2639/7340 [90:22<160:59, 29.2 steps/min]INFO:openai._base_client:Retrying request to /chat/completions in 0.447624 seconds\n",
+ "\u001b[92m16:56:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:41,671 - agent.ComputerAgent - INFO - Computer: click({'x': 466, 'y': 394})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 466, 'y': 394})\n",
+ "2025-08-11 16:56:42,341 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:56:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:43,007 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:56:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2640/7340 [90:24<160:57, 29.2 steps/min]\u001b[92m16:56:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:44,442 - agent.ComputerAgent - INFO - Computer: click({'x': 264, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 264, 'y': 101})\n",
+ " 36%|██████████████--------------------------| 2641/7340 [90:26<160:54, 29.2 steps/min]\u001b[92m16:56:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:45,068 - agent.ComputerAgent - INFO - Computer: click({'x': 340, 'y': 406})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 340, 'y': 406})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2642/7340 [90:27<160:50, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:46,634 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:56:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2643/7340 [90:28<160:47, 29.2 steps/min]2025-08-11 16:56:47,305 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 433, 'y': 323}, {'x': 433, 'y': 713}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 433, 'y': 323}, {'x': 433, 'y': 713}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:56:47,965 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:56:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2644/7340 [90:29<160:43, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:48,629 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 243, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2645/7340 [90:31<160:41, 29.2 steps/min]\u001b[92m16:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:50,643 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m16:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:56:51,316 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:56:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2646/7340 [90:33<160:38, 29.2 steps/min]\u001b[92m16:56:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:51,971 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 34})\n",
+ "2025-08-11 16:56:52,645 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:56:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2646/7340 [90:34<160:40, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:56:53,295 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:56:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:54,620 - agent.ComputerAgent - INFO - Computer: type({'text': 'Happy Family'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Happy Family'})\n",
+ " 36%|██████████████--------------------------| 2647/7340 [90:36<160:38, 29.2 steps/min]\u001b[92m16:56:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:56:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 36%|██████████████--------------------------| 2649/7340 [90:37<160:28, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2649/7340 [90:38<160:30, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:56:56,955 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:56:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:56:57,646 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m16:56:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:56:58,353 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 113, 'y': 183}, {'x': 106, 'y': 325}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 113, 'y': 183}, {'x': 106, 'y': 325}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2649/7340 [90:40<160:33, 29.2 steps/min]2025-08-11 16:56:59,027 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:56:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:56:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:56:59,728 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 632})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 632})\n",
+ "2025-08-11 16:57:00,365 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:57:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2650/7340 [90:42<160:31, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:01,027 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:57:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:01,670 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:57:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2651/7340 [90:43<160:28, 29.2 steps/min]2025-08-11 16:57:02,337 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:57:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 36%|██████████████--------------------------| 2652/7340 [90:44<160:24, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2652/7340 [90:45<160:26, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:04,667 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m16:57:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:06,385 - agent.ComputerAgent - INFO - Computer: type({'text': '.jpg'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '.jpg'})\n",
+ "\u001b[92m16:57:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2652/7340 [90:48<160:30, 29.2 steps/min]2025-08-11 16:57:07,048 - agent.ComputerAgent - INFO - Computer: click({'x': 869, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 869, 'y': 182})\n",
+ "2025-08-11 16:57:07,735 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2653/7340 [90:49<160:27, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:09,087 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:57:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:10,442 - agent.ComputerAgent - INFO - Agent: I tried to access your Google Drive but I’m blocked by Google’s 2‑Step Verification for the account shown on screen. Without a verification code (or a public link to the invoice), I can’t open the file to extract the table.\n",
+ "\n",
+ "Please provide one of the following so I can proceed immediately:\n",
+ "- A current Google Authenticator verification code for this login, or\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice on Drive, or\n",
+ "- Upload the invoice file here in the chat, or\n",
+ "- Share the invoice via another public link (Dropbox/OneDrive/etc.).\n",
+ "\n",
+ "Once I have access to the invoice, I will:\n",
+ "1) Open the invoice and capture the table data (using copy/paste or OCR if needed).\n",
+ "2) Create a LibreOffice Calc spreadsheet with the extracted fields.\n",
+ "3) Export it as .xlsx and save it on the Desktop with a clear name (for example, Invoice_Table.xlsx), then confirm completion.\n",
+ "\n",
+ "I’m ready to continue as soon as you provide access.\n",
+ "INFO:agent.ComputerAgent:Agent: I tried to access your Google Drive but I’m blocked by Google’s 2‑Step Verification for the account shown on screen. Without a verification code (or a public link to the invoice), I can’t open the file to extract the table.\n",
+ "\n",
+ "Please provide one of the following so I can proceed immediately:\n",
+ "- A current Google Authenticator verification code for this login, or\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice on Drive, or\n",
+ "- Upload the invoice file here in the chat, or\n",
+ "- Share the invoice via another public link (Dropbox/OneDrive/etc.).\n",
+ "\n",
+ "Once I have access to the invoice, I will:\n",
+ "1) Open the invoice and capture the table data (using copy/paste or OCR if needed).\n",
+ "2) Create a LibreOffice Calc spreadsheet with the extracted fields.\n",
+ "3) Export it as .xlsx and save it on the Desktop with a clear name (for example, Invoice_Table.xlsx), then confirm completion.\n",
+ "\n",
+ "I’m ready to continue as soon as you provide access.\n",
+ "2025-08-11 16:57:11,117 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1370\n",
+ " - prompt_tokens: 2637\n",
+ " - total_tokens: 4007\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0170\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1370\n",
+ " - prompt_tokens: 2637\n",
+ " - total_tokens: 4007\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0170\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2655/7340 [90:53<160:23, 29.2 steps/min]\u001b[92m16:57:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:57:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:12,466 - agent.ComputerAgent - INFO - Computer: click({'x': 280, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 280, 'y': 219})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2656/7340 [90:54<160:19, 29.2 steps/min]\u001b[92m16:57:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:13,653 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 658, 'scroll_x': 0, 'x': 416, 'y': 660})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 658, 'scroll_x': 0, 'x': 416, 'y': 660})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:14,306 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m16:57:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2657/7340 [90:56<160:16, 29.2 steps/min]2025-08-11 16:57:14,948 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:57:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2658/7340 [90:57<160:13, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:16,284 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:57:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:16,935 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:57:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2658/7340 [90:58<160:15, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:57:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:17,631 - agent.ComputerAgent - INFO - Computer: click({'x': 731, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 731, 'y': 34})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:19,343 - agent.ComputerAgent - INFO - Computer: type({'text': \"Let's start\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"Let's start\"})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:22,033 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 36%|██████████████--------------------------| 2659/7340 [91:03<160:18, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:22,688 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:57:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:24,024 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'down'})\n",
+ "2025-08-11 16:57:24,699 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:57:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:57:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:26,068 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl'})\n",
+ " 36%|██████████████--------------------------| 2661/7340 [91:07<160:14, 29.2 steps/min]2025-08-11 16:57:27,115 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:57:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:27,808 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 148})\n",
+ "\u001b[92m16:57:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2663/7340 [91:10<160:08, 29.2 steps/min]2025-08-11 16:57:29,943 - agent.ComputerAgent - INFO - Computer: click({'x': 987, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 987, 'y': 627})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/reset \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2664/7340 [91:11<160:04, 29.2 steps/min]2025-08-11 16:57:30,568 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m16:57:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:31,263 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:57:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2665/7340 [91:13<160:01, 29.2 steps/min]\u001b[92m16:57:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:31,958 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 984, 'y': 627})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2665/7340 [91:14<160:03, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:33,637 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:57:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 36%|██████████████--------------------------| 2666/7340 [91:15<159:59, 29.2 steps/min]2025-08-11 16:57:34,301 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m16:57:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:57:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:35,655 - agent.ComputerAgent - INFO - Computer: click({'x': 264, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 264, 'y': 101})\n",
+ " 36%|██████████████--------------------------| 2666/7340 [91:17<160:02, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:57:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:36,978 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m16:57:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2668/7340 [91:18<159:53, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:37,674 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:57:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:57:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:38,317 - agent.ComputerAgent - INFO - Computer: click({'x': 574, 'y': 456})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 574, 'y': 456})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:39,663 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ " 36%|██████████████--------------------------| 2668/7340 [91:21<159:58, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:57:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:40,940 - agent.ComputerAgent - INFO - Computer: click({'x': 707, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 707, 'y': 306})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:42,231 - agent.ComputerAgent - INFO - Computer: type({'text': ' or .png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' or .png'})\n",
+ " 36%|██████████████--------------------------| 2669/7340 [91:23<159:57, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:43,497 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:57:43,498 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+b'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+b'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2671/7340 [91:25<159:49, 29.2 steps/min]2025-08-11 16:57:44,827 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 223})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 268, 'y': 223})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:45,472 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:57:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2671/7340 [91:27<159:51, 29.2 steps/min]2025-08-11 16:57:46,139 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:57:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:46,784 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:57:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:57:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 36%|██████████████--------------------------| 2672/7340 [91:28<159:48, 29.2 steps/min]2025-08-11 16:57:47,477 - agent.ComputerAgent - INFO - Computer: double_click({'x': 472, 'y': 206})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 472, 'y': 206})\n",
+ "2025-08-11 16:57:48,102 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:57:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 36%|██████████████--------------------------| 2672/7340 [91:30<159:51, 29.2 steps/min]\u001b[92m16:57:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:57:50,065 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ " 36%|██████████████--------------------------| 2673/7340 [91:31<159:48, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:57:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:57:50,683 - agent.ComputerAgent - INFO - Computer: double_click({'x': 194, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 194, 'y': 105})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:51,308 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m16:57:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2680/7340 [91:33<159:11, 29.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:52,652 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81b23870-39ed-4649-9729-1d4809f713ec/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2681/7340 [91:34<159:08, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:53,984 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:57:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:55,750 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f11'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'f11'})\n",
+ " 37%|██████████████--------------------------| 2681/7340 [91:37<159:13, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:56,416 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m16:57:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:57,117 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:57:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:57,761 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:57:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:57:58,430 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:57:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2682/7340 [91:40<159:12, 29.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:59,067 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:57:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:57:59,710 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:57:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2682/7340 [91:42<159:15, 29.2 steps/min]\u001b[92m16:58:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2682/7340 [91:43<159:17, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1473c3f2-39e1-4aff-8d55-0e23dc25a055/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]159:19, 29.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2682/7340 [91:46<159:22, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]29.2 steps/min]2025-08-11 16:58:06,748 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ " 37%|██████████████--------------------------| 2682/7340 [91:48<159:26, 29.2 steps/min]\u001b[92m16:58:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2682/7340 [91:49<159:28, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.57s/it]\n",
+ "2025-08-11 16:58:09,263 - agent.ComputerAgent - INFO - Computer: key({'keys': 'shift+f10'})\n",
+ "INFO:agent.ComputerAgent:Computer: key({'keys': 'shift+f10'})\n",
+ "2025-08-11 16:58:09,264 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m16:58:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: key\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:58:10,641 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f2'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'f2'})\n",
+ " 37%|██████████████--------------------------| 2684/7340 [91:53<159:24, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:12,399 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m16:58:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2684/7340 [91:54<159:25, 29.2 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.89s/it] \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2685/7340 [91:55<159:22, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2685/7340 [91:56<159:23, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 16:58:15,695 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m16:58:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:17,072 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:19,662 - agent.ComputerAgent - INFO - Agent: I restored the poster file “poster_party_night.webp” from the Trash to its original location on your Desktop and opened it to confirm. You can find it on the Desktop.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I restored the poster file “poster_party_night.webp” from the Trash to its original location on your Desktop and opened it to confirm. You can find it on the Desktop.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 16:58:20,291 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 240\n",
+ " - prompt_tokens: 10837\n",
+ " - total_tokens: 11077\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0159\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 240\n",
+ " - prompt_tokens: 10837\n",
+ " - total_tokens: 11077\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0159\n",
+ " 37%|██████████████--------------------------| 2686/7340 [92:02<159:27, 29.2 steps/min]2025-08-11 16:58:20,967 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m16:58:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2686/7340 [92:04<159:31, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:58:23,636 - agent.ComputerAgent - INFO - Computer: type({'text': \"Let's start\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"Let's start\"})\n",
+ " 37%|██████████████--------------------------| 2686/7340 [92:05<159:33, 29.2 steps/min]\u001b[92m16:58:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:58:24,315 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 625, 'scroll_x': 0, 'x': 416, 'y': 661})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 625, 'scroll_x': 0, 'x': 416, 'y': 661})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 16:58:24,919 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2688/7340 [92:06<159:24, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a46ee6f6-d167-47c4-ad83-e16b88450253/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:26,284 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m16:58:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2689/7340 [92:08<159:21, 29.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2689/7340 [92:09<159:23, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2695/7340 [92:10<158:51, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2b43eb21-4025-495a-8c66-358bfcac034b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 37%|██████████████--------------------------| 2696/7340 [92:11<158:48, 29.2 steps/min]2025-08-11 16:58:30,288 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m16:58:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2696/7340 [92:12<158:49, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:31,450 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m16:58:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2696/7340 [92:13<158:51, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:32,626 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m16:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2696/7340 [92:14<158:53, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2696/7340 [92:15<158:55, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:04<00:04, 2.21s/it]\u001b[92m16:58:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2697/7340 [92:17<158:52, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:58:36,503 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:06<00:01, 1.90s/it]\n",
+ "\u001b[92m16:58:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.79s/it]\n",
+ " 37%|██████████████--------------------------| 2698/7340 [92:20<158:52, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:39,358 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m16:58:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2698/7340 [92:21<158:54, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 37%|██████████████--------------------------| 2699/7340 [92:23<158:52, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ "2025-08-11 16:58:42,591 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m16:58:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:58:44,312 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:58:45,654 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 37%|██████████████--------------------------| 2699/7340 [92:27<158:58, 29.2 steps/min]2025-08-11 16:58:46,324 - agent.ComputerAgent - INFO - Computer: click({'x': 254, 'y': 34})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:58:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m16:58:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:58:47,595 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m16:58:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:58:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:58:48,261 - agent.ComputerAgent - INFO - Computer: click({'x': 226, 'y': 155})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:58:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:58:49,648 - agent.ComputerAgent - INFO - Computer: type({'text': 'export.jpg'})\n",
+ "2025-08-11 16:58:50,279 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ " 37%|██████████████--------------------------| 2700/7340 [92:32<159:01, 29.2 steps/min]\u001b[92m16:58:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:58:50,981 - agent.ComputerAgent - INFO - Computer: move({'x': 300, 'y': 215})\n",
+ "\u001b[92m16:58:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:58:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:58:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:58:51,658 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 237})\n",
+ "2025-08-11 16:58:52,357 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 653, 'scroll_x': 0, 'x': 334, 'y': 700})\n",
+ "2025-08-11 16:58:53,008 - agent.ComputerAgent - INFO - Computer: click({'x': 577, 'y': 457})\n",
+ " 37%|██████████████--------------------------| 2703/7340 [92:34<158:49, 29.2 steps/min]\u001b[92m16:58:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:58:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:58:53,649 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 474, 'y': 325}, {'x': 101, 'y': 737}]})\n",
+ "2025-08-11 16:58:54,322 - agent.ComputerAgent - INFO - Computer: click({'x': 657, 'y': 361})\n",
+ " 37%|██████████████--------------------------| 2709/7340 [92:37<158:19, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:55,970 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m16:58:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2709/7340 [92:40<158:24, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:58:59,701 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m16:58:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2709/7340 [92:41<158:27, 29.2 steps/min]2025-08-11 16:59:00,358 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m16:59:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:01,384 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m16:59:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:02,057 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:59:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 37%|██████████████--------------------------| 2710/7340 [92:43<158:25, 29.2 steps/min]2025-08-11 16:59:03,240 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m16:59:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2710/7340 [92:45<158:27, 29.2 steps/min]2025-08-11 16:59:03,909 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m16:59:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:04,558 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m16:59:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2710/7340 [92:46<158:30, 29.2 steps/min]2025-08-11 16:59:05,228 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:59:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:05,937 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m16:59:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1f48e361-2592-41ee-8818-d6e9174fe800/reset \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2710/7340 [92:47<158:32, 29.2 steps/min]2025-08-11 16:59:06,637 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m16:59:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:59:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2710/7340 [92:49<158:34, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:59:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:59:08,463 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 380})\n",
+ " 37%|██████████████--------------------------| 2711/7340 [92:51<158:32, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:59:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2711/7340 [92:52<158:35, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:59:11,668 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:59:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:12,337 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m16:59:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:59:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2711/7340 [92:54<158:37, 29.2 steps/min]2025-08-11 16:59:12,984 - agent.ComputerAgent - INFO - Computer: click({'x': 924, 'y': 167})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 37%|██████████████--------------------------| 2712/7340 [92:55<158:33, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2713/7340 [92:56<158:29, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:15,182 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m16:59:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2713/7340 [92:57<158:31, 29.2 steps/min]2025-08-11 16:59:15,858 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m16:59:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2713/7340 [92:58<158:33, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2713/7340 [92:59<158:35, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:18,527 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m16:59:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2713/7340 [93:00<158:38, 29.2 steps/min]\u001b[92m16:59:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:59:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:59:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:21,112 - agent.ComputerAgent - INFO - Computer: type({'text': 'Forward all to Gmail'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:59:21,809 - agent.ComputerAgent - INFO - Computer: click({'x': 499, 'y': 64})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:23,135 - agent.ComputerAgent - INFO - Computer: click({'x': 277, 'y': 37, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2714/7340 [93:04<158:39, 29.2 steps/min]\u001b[92m16:59:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:24,452 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:59:24,453 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:25,781 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 16:59:26,441 - agent.ComputerAgent - INFO - Computer: click({'x': 270, 'y': 223})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:27,802 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'down'})\n",
+ " 37%|██████████████--------------------------| 2721/7340 [93:10<158:10, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:29,469 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m16:59:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:30,723 - agent.ComputerAgent - INFO - Computer: type({'text': 'Microsoft JhengHei'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2721/7340 [93:13<158:14, 29.2 steps/min]\u001b[92m16:59:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:32,662 - agent.ComputerAgent - INFO - Computer: type({'text': 'woman_sitting_by_the_tree_dim.png'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 37%|██████████████--------------------------| 2722/7340 [93:15<158:12, 29.2 steps/min]\u001b[92m16:59:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m16:59:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:59:33,979 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 16:59:33,980 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 335})\n",
+ "\u001b[92m16:59:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:59:34,657 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m16:59:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:59:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:59:35,994 - agent.ComputerAgent - INFO - Computer: move({'x': 388, 'y': 705})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 388, 'y': 705})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m16:59:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2724/7340 [93:18<158:06, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:59:37,319 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m16:59:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:37,968 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:59:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:38,619 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m16:59:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:59:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 37%|██████████████--------------------------| 2726/7340 [93:20<157:59, 29.2 steps/min]2025-08-11 16:59:39,689 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m16:59:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:40,381 - agent.ComputerAgent - INFO - Computer: click({'x': 784, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 784, 'y': 40})\n",
+ "\u001b[92m16:59:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2726/7340 [93:22<158:02, 29.2 steps/min]2025-08-11 16:59:41,057 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:59:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:41,759 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 243})\n",
+ "2025-08-11 16:59:42,427 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:59:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:43,737 - agent.ComputerAgent - INFO - Computer: type({'text': 'export'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'export'})\n",
+ " 37%|██████████████--------------------------| 2729/7340 [93:26<157:53, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:46,247 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m16:59:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2729/7340 [93:28<157:55, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2729/7340 [93:29<157:57, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2729/7340 [93:30<157:58, 29.2 steps/min]2025-08-11 16:59:48,949 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m16:59:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:50,668 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 16:59:51,980 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+f11'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+f11'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2730/7340 [93:33<157:59, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m16:59:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d2c8050e-87aa-487a-9555-884be2298ade/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:59:53,299 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m16:59:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 16:59:53,934 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ " 37%|██████████████--------------------------| 2730/7340 [93:35<158:02, 29.2 steps/min]\u001b[92m16:59:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m16:59:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 16:59:54,623 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 95, 'y': 53})\n",
+ "2025-08-11 16:59:55,292 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m16:59:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/reset \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2730/7340 [93:37<158:05, 29.2 steps/min]2025-08-11 16:59:55,968 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m16:59:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:57,001 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m16:59:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:57,679 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m16:59:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2731/7340 [93:39<158:03, 29.2 steps/min]2025-08-11 16:59:58,346 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m16:59:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:59,008 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 16:59:59,687 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m16:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2731/7340 [93:41<158:07, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:00:01,016 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:00:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:00:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2731/7340 [93:42<158:09, 29.1 steps/min]2025-08-11 17:00:01,705 - agent.ComputerAgent - INFO - Computer: click({'x': 331, 'y': 223})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 331, 'y': 223})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:03,443 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+down'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+down'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2731/7340 [93:45<158:14, 29.1 steps/min]\u001b[92m17:00:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:00:04,785 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:00:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:00:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2732/7340 [93:47<158:11, 29.1 steps/min]\u001b[92m17:00:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:00:06,194 - agent.ComputerAgent - INFO - Computer: move({'x': 375, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 375, 'y': 214})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:00:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:00:07,506 - agent.ComputerAgent - INFO - Computer: click({'x': 878, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 878, 'y': 168})\n",
+ " 37%|██████████████--------------------------| 2733/7340 [93:49<158:09, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:00:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:00:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:00:08,834 - agent.ComputerAgent - INFO - Computer: click({'x': 116, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 116, 'y': 53})\n",
+ "\u001b[92m17:00:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2735/7340 [93:50<158:00, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:00:09,491 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 53})\n",
+ " 37%|██████████████--------------------------| 2736/7340 [93:51<157:56, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2737/7340 [93:52<157:52, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 37%|██████████████--------------------------| 2737/7340 [93:53<157:54, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:00:12,375 - agent.ComputerAgent - INFO - Computer: click({'x': 322, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 322, 'y': 35})\n",
+ "2025-08-11 17:00:13,033 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:00:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2737/7340 [93:55<157:57, 29.1 steps/min]\u001b[92m17:00:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:00:15,072 - agent.ComputerAgent - INFO - Agent: I attempted to access your Google Drive, but I’m blocked by Google’s 2‑Step Verification and domain restrictions. The screen shows: “Couldn’t sign you in. Contact your domain admin for help.” Without a verification code or a public share link, I cannot open the invoice to extract the table.\n",
+ "\n",
+ "To proceed, please provide one of the following:\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice in Google Drive, or\n",
+ "- Upload the invoice file here in the chat, or\n",
+ "- A current Google Authenticator verification code for this login (if you prefer I access the Drive directly).\n",
+ "\n",
+ "Once I have access to the invoice, I will extract the table data, create a LibreOffice Calc spreadsheet, export it as .xlsx, and save it on the Desktop.\n",
+ "\n",
+ "This security block prevents completion right now.\n",
+ "INFO:agent.ComputerAgent:Agent: I attempted to access your Google Drive, but I’m blocked by Google’s 2‑Step Verification and domain restrictions. The screen shows: “Couldn’t sign you in. Contact your domain admin for help.” Without a verification code or a public share link, I cannot open the invoice to extract the table.\n",
+ "\n",
+ "To proceed, please provide one of the following:\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice in Google Drive, or\n",
+ "- Upload the invoice file here in the chat, or\n",
+ "- A current Google Authenticator verification code for this login (if you prefer I access the Drive directly).\n",
+ "\n",
+ "Once I have access to the invoice, I will extract the table data, create a LibreOffice Calc spreadsheet, export it as .xlsx, and save it on the Desktop.\n",
+ "\n",
+ "This security block prevents completion right now.\n",
+ "2025-08-11 17:00:15,698 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1460\n",
+ " - prompt_tokens: 4911\n",
+ " - total_tokens: 6371\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1280\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0207\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1460\n",
+ " - prompt_tokens: 4911\n",
+ " - total_tokens: 6371\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1280\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0207\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:00:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2739/7340 [93:58<157:51, 29.1 steps/min]2025-08-11 17:00:17,041 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 406})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 406})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:00:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:00:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2741/7340 [93:59<157:42, 29.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:00:18,387 - agent.ComputerAgent - INFO - Computer: click({'x': 550, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 550, 'y': 627})\n",
+ "2025-08-11 17:00:19,012 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:00:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:00:19,686 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:00:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:00:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2742/7340 [94:02<157:41, 29.2 steps/min]2025-08-11 17:00:21,034 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 689, 'scroll_x': 0, 'x': 403, 'y': 697})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 689, 'scroll_x': 0, 'x': 403, 'y': 697})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:00:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:00:22,322 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:00:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:00:22,970 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:00:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:00:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2743/7340 [94:04<157:40, 29.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:00:23,593 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:00:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:00:24,279 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:00:24,280 - agent.ComputerAgent - INFO - Computer: click({'x': 497, 'y': 340})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 497, 'y': 340})\n",
+ "\u001b[92m17:00:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2744/7340 [94:06<157:36, 29.2 steps/min]2025-08-11 17:00:24,940 - agent.ComputerAgent - INFO - Computer: click({'x': 109, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 109, 'y': 245})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e2ffab0a-c998-4bbf-906b-d3aad0586220/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/daac505f-9423-4b29-b11c-9b23c5c9e3ee/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:26,989 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 37%|██████████████--------------------------| 2745/7340 [94:08<157:35, 29.2 steps/min]2025-08-11 17:00:27,607 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:00:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/687e10e4-fe9c-4767-a255-77d9b553a724/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:28,933 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:00:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2746/7340 [94:10<157:33, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:00:30,307 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:00:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2746/7340 [94:12<157:35, 29.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:31,018 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:00:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.83s/it]2025-08-11 17:00:32,576 - agent.ComputerAgent - INFO - Computer: type({'text': 'Forward to anonym-x2024@gmail.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Forward to anonym-x2024@gmail.com'})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.70s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2746/7340 [94:15<157:41, 29.1 steps/min]2025-08-11 17:00:34,390 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:00:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.65s/it]2025-08-11 17:00:35,300 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:00:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]29.1 steps/min]\n",
+ "2025-08-11 17:00:35,987 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:00:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:00:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2747/7340 [94:19<157:42, 29.1 steps/min]2025-08-11 17:00:38,209 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:00:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:00:38,869 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:00:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2747/7340 [94:20<157:44, 29.1 steps/min]2025-08-11 17:00:39,530 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:00:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2747/7340 [94:23<157:49, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:43,892 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:45,200 - agent.ComputerAgent - INFO - Computer: click({'x': 330, 'y': 37, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 330, 'y': 37, 'button': 'left'})\n",
+ " 37%|██████████████--------------------------| 2747/7340 [94:26<157:55, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:47,203 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:00:47,204 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2749/7340 [94:29<157:48, 29.1 steps/min]\u001b[92m17:00:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:00:48,572 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:00:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:00:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2749/7340 [94:31<157:52, 29.1 steps/min]\u001b[92m17:00:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:00:50,684 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:00:50,685 - agent.ComputerAgent - INFO - Computer: click({'x': 302, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 302, 'y': 77})\n",
+ " 37%|██████████████--------------------------| 2750/7340 [94:33<157:50, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:53,553 - agent.ComputerAgent - INFO - Agent: I’m blocked by Google’s 2‑Step Verification and domain restrictions on the current screen (“Couldn’t sign you in. Contact your domain admin for help”). Without access to your Drive, I cannot open the new invoice to extract the table.\n",
+ "\n",
+ "To proceed, please provide one of the following:\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice in Google Drive, or\n",
+ "- Upload the invoice file here in the chat, or\n",
+ "- A current Google Authenticator verification code for this login (or a backup code), or\n",
+ "- Temporarily share the invoice via another public link (Dropbox/OneDrive/etc.).\n",
+ "\n",
+ "Once I have access to the invoice, I will extract the table data and save it to a LibreOffice Calc .xlsx file on the Desktop.\n",
+ "INFO:agent.ComputerAgent:Agent: I’m blocked by Google’s 2‑Step Verification and domain restrictions on the current screen (“Couldn’t sign you in. Contact your domain admin for help”). Without access to your Drive, I cannot open the new invoice to extract the table.\n",
+ "\n",
+ "To proceed, please provide one of the following:\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice in Google Drive, or\n",
+ "- Upload the invoice file here in the chat, or\n",
+ "- A current Google Authenticator verification code for this login (or a backup code), or\n",
+ "- Temporarily share the invoice via another public link (Dropbox/OneDrive/etc.).\n",
+ "\n",
+ "Once I have access to the invoice, I will extract the table data and save it to a LibreOffice Calc .xlsx file on the Desktop.\n",
+ "2025-08-11 17:00:54,221 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1574\n",
+ " - prompt_tokens: 5103\n",
+ " - total_tokens: 6677\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1408\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0221\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1574\n",
+ " - prompt_tokens: 5103\n",
+ " - total_tokens: 6677\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1408\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0221\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2751/7340 [94:36<157:49, 29.1 steps/min]\u001b[92m17:00:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:55,544 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:00:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:00:56,240 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:00:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2751/7340 [94:37<157:51, 29.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:56,900 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:00:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:00:57,574 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:00:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 37%|██████████████--------------------------| 2751/7340 [94:40<157:54, 29.1 steps/min]\u001b[92m17:00:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:00:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:00:59,591 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:00:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2751/7340 [94:41<157:57, 29.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 37%|██████████████--------------------------| 2751/7340 [94:42<157:58, 29.0 steps/min]\u001b[92m17:01:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:01:01,268 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 212, 'y': 53})\n",
+ " 37%|██████████████--------------------------| 2751/7340 [94:43<158:00, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:03,645 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 37%|██████████████--------------------------| 2752/7340 [94:45<157:58, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:05,476 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "2025-08-11 17:01:06,102 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:01:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:01:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 37%|██████████████--------------------------| 2752/7340 [94:49<158:04, 29.0 steps/min]\u001b[92m17:01:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:09,077 - agent.ComputerAgent - INFO - Computer: type({'text': 'community.flightaware.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'community.flightaware.com'})\n",
+ " 38%|███████████████-------------------------| 2753/7340 [94:50<158:01, 29.0 steps/min]2025-08-11 17:01:09,761 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:01:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2754/7340 [94:51<157:58, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:12,472 - agent.ComputerAgent - INFO - Computer: type({'text': '@'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '@'})\n",
+ " 38%|███████████████-------------------------| 2754/7340 [94:54<158:02, 29.0 steps/min]\u001b[92m17:01:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:01:13,161 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 130})\n",
+ " 38%|███████████████-------------------------| 2756/7340 [94:56<157:54, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2756/7340 [94:57<157:55, 29.0 steps/min]2025-08-11 17:01:16,344 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:01:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2756/7340 [94:58<157:57, 29.0 steps/min]2025-08-11 17:01:17,022 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:01:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2756/7340 [94:59<157:59, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2756/7340 [95:00<158:01, 29.0 steps/min]2025-08-11 17:01:18,721 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:01:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2756/7340 [95:01<158:02, 29.0 steps/min]2025-08-11 17:01:20,392 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:01:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2756/7340 [95:03<158:06, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:01:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2756/7340 [95:04<158:07, 29.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:24,362 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:01:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 38%|███████████████-------------------------| 2756/7340 [95:06<158:11, 29.0 steps/min]\u001b[92m17:01:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:01:25,641 - agent.ComputerAgent - INFO - Computer: double_click({'x': 469, 'y': 205})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 469, 'y': 205})\n",
+ " 38%|███████████████-------------------------| 2758/7340 [95:10<158:07, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:30,535 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:31,832 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 38%|███████████████-------------------------| 2758/7340 [95:13<158:12, 29.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:32,501 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:01:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:01:33,515 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:01:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2759/7340 [95:15<158:09, 29.0 steps/min]2025-08-11 17:01:34,181 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:01:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2759/7340 [95:17<158:12, 29.0 steps/min]\u001b[92m17:01:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:01:36,561 - agent.ComputerAgent - INFO - Computer: click({'x': 239, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 239, 'y': 143})\n",
+ " 38%|███████████████-------------------------| 2759/7340 [95:18<158:14, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:01:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:20<158:12, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:21<158:14, 28.9 steps/min]2025-08-11 17:01:40,932 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:01:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:22<158:16, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:23<158:18, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:43,135 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:01:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:24<158:19, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:28<158:26, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:01:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:01:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:01:48,555 - agent.ComputerAgent - INFO - Computer: click({'x': 440, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 440, 'y': 219})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2760/7340 [95:30<158:28, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:31<158:25, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:32<158:26, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:51,881 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/reset \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:33<158:28, 28.9 steps/min]2025-08-11 17:01:53,043 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:01:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:34<158:30, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:01:54,211 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:01:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:35<158:32, 28.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:01:54,836 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:01:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:39<158:39, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:01:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2761/7340 [95:40<158:41, 28.9 steps/min]\u001b[92m17:01:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:02:00,276 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 95, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:01,607 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 38%|███████████████-------------------------| 2763/7340 [95:44<158:35, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:04,473 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:02:04,474 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ " 38%|███████████████-------------------------| 2763/7340 [95:46<158:38, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:02:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2764/7340 [95:47<158:34, 28.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:02:06,837 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:02:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2764/7340 [95:48<158:37, 28.8 steps/min]2025-08-11 17:02:07,529 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:02:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:02:08,877 - agent.ComputerAgent - INFO - Computer: type({'text': \"Let's start\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"Let's start\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2764/7340 [95:50<158:40, 28.8 steps/min]2025-08-11 17:02:09,512 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:02:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2765/7340 [95:52<158:38, 28.8 steps/min]\u001b[92m17:02:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:02:11,788 - agent.ComputerAgent - INFO - Computer: click({'x': 361, 'y': 549})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 361, 'y': 549})\n",
+ " 38%|███████████████-------------------------| 2766/7340 [95:54<158:36, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:02:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2766/7340 [95:55<158:37, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:15,609 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:02:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2766/7340 [95:58<158:41, 28.8 steps/min]\u001b[92m17:02:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2766/7340 [95:59<158:43, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2766/7340 [96:00<158:45, 28.8 steps/min]2025-08-11 17:02:18,946 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:02:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2766/7340 [96:04<158:51, 28.8 steps/min]\u001b[92m17:02:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:02:23,756 - agent.ComputerAgent - INFO - Computer: click({'x': 932, 'y': 574})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 932, 'y': 574})\n",
+ " 38%|███████████████-------------------------| 2767/7340 [96:06<158:50, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:26,627 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2767/7340 [96:09<158:54, 28.8 steps/min]\u001b[92m17:02:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:02:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2768/7340 [96:10<158:50, 28.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2768/7340 [96:11<158:52, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:30,628 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:02:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2768/7340 [96:12<158:54, 28.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2768/7340 [96:14<158:57, 28.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:33,833 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:02:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2768/7340 [96:15<158:59, 28.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:02:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:02:35,117 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 520, 'y': 452})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 520, 'y': 452})\n",
+ " 38%|███████████████-------------------------| 2769/7340 [96:22<159:06, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:42,384 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:02:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2769/7340 [96:24<159:08, 28.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:02:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2769/7340 [96:25<159:10, 28.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:45,395 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 38%|███████████████-------------------------| 2770/7340 [96:28<159:09, 28.7 steps/min]\u001b[92m17:02:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:02:47,035 - agent.ComputerAgent - INFO - Computer: click({'x': 347, 'y': 494})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 347, 'y': 494})\n",
+ " 38%|███████████████-------------------------| 2771/7340 [96:33<159:12, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/reset \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2771/7340 [96:34<159:13, 28.7 steps/min]2025-08-11 17:02:52,808 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:02:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:02:53,512 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:02:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2771/7340 [96:35<159:15, 28.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:02:54,863 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2771/7340 [96:37<159:18, 28.7 steps/min]\u001b[92m17:02:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:02:56,148 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:02:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2771/7340 [96:39<159:22, 28.7 steps/min]\u001b[92m17:02:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:02:58,483 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:02:58,484 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 43})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 43})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2771/7340 [96:40<159:23, 28.7 steps/min]2025-08-11 17:02:59,137 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:02:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2772/7340 [96:45<159:26, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:05,104 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:03:05,105 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 38%|███████████████-------------------------| 2772/7340 [96:46<159:29, 28.6 steps/min]2025-08-11 17:03:05,803 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:03:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2773/7340 [96:50<159:30, 28.6 steps/min]\u001b[92m17:03:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:03:10,097 - agent.ComputerAgent - INFO - Computer: click({'x': 1010, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1010, 'y': 62})\n",
+ " 38%|███████████████-------------------------| 2774/7340 [96:52<159:27, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:11,755 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:03:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2774/7340 [96:53<159:29, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:03:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2774/7340 [96:54<159:31, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2774/7340 [96:56<159:34, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2774/7340 [96:57<159:36, 28.6 steps/min]2025-08-11 17:03:16,764 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:03:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2774/7340 [96:59<159:39, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:19,712 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 38%|███████████████-------------------------| 2775/7340 [97:02<159:38, 28.6 steps/min]\u001b[92m17:03:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:03:21,917 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 426})\n",
+ " 38%|███████████████-------------------------| 2776/7340 [97:05<159:37, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:25,263 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 38%|███████████████-------------------------| 2776/7340 [97:06<159:40, 28.6 steps/min]2025-08-11 17:03:26,413 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:03:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2776/7340 [97:08<159:42, 28.6 steps/min]2025-08-11 17:03:27,085 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:03:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:03:27,765 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:03:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2776/7340 [97:09<159:44, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2776/7340 [97:10<159:45, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:03:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2776/7340 [97:14<159:52, 28.5 steps/min]\u001b[92m17:03:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:03:33,711 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 231})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 249, 'y': 231})\n",
+ " 38%|███████████████-------------------------| 2777/7340 [97:20<159:56, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:39,437 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:03:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:03:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2777/7340 [97:22<160:00, 28.5 steps/min]\u001b[92m17:03:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2777/7340 [97:23<160:01, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:43,184 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+r'})\n",
+ " 38%|███████████████-------------------------| 2777/7340 [97:24<160:03, 28.5 steps/min]2025-08-11 17:03:44,357 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:03:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2777/7340 [97:26<160:05, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:03:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:03:45,253 - agent.ComputerAgent - INFO - Computer: click({'x': 883, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 883, 'y': 34})\n",
+ " 38%|███████████████-------------------------| 2778/7340 [97:32<160:10, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:51,575 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:03:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2778/7340 [97:33<160:12, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2778/7340 [97:34<160:13, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:53,888 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 38%|███████████████-------------------------| 2779/7340 [97:37<160:13, 28.5 steps/min]\u001b[92m17:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:03:57,094 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 129})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 129})\n",
+ " 38%|███████████████-------------------------| 2779/7340 [97:38<160:15, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:03:59,419 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:00,749 - agent.ComputerAgent - INFO - Computer: type({'text': 'Changes'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Changes'})\n",
+ " 38%|███████████████-------------------------| 2780/7340 [97:42<160:16, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:01,435 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:04:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2781/7340 [97:45<160:15, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:04,640 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:04:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2781/7340 [97:47<160:18, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:06,847 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:04:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2781/7340 [97:48<160:20, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2781/7340 [97:49<160:22, 28.4 steps/min]\u001b[92m17:04:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:08,608 - agent.ComputerAgent - INFO - Computer: click({'x': 717, 'y': 659})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 717, 'y': 659})\n",
+ " 38%|███████████████-------------------------| 2782/7340 [97:51<160:19, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:04:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2782/7340 [97:53<160:22, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:04:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2782/7340 [97:56<160:27, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:15,201 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:04:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2782/7340 [97:57<160:28, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:04:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2782/7340 [97:58<160:30, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2782/7340 [97:59<160:32, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:04:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2782/7340 [98:01<160:35, 28.4 steps/min]\u001b[92m17:04:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:20,510 - agent.ComputerAgent - INFO - Computer: click({'x': 577, 'y': 519})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 577, 'y': 519})\n",
+ " 38%|███████████████-------------------------| 2783/7340 [98:07<160:40, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:26,831 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:04:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2783/7340 [98:08<160:42, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2783/7340 [98:12<160:48, 28.3 steps/min]\u001b[92m17:04:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:32,177 - agent.ComputerAgent - INFO - Computer: double_click({'x': 471, 'y': 206})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 471, 'y': 206})\n",
+ " 38%|███████████████-------------------------| 2784/7340 [98:14<160:46, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2789/7340 [98:15<160:20, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/422a20c8-b318-46e4-9f06-d599c9ed261c/close \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2789/7340 [98:16<160:22, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2789/7340 [98:18<160:25, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:38,094 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:04:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2789/7340 [98:19<160:27, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:04:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2789/7340 [98:21<160:29, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]28.3 steps/min]\n",
+ "\u001b[92m17:04:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:48,899 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 753})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 753})\n",
+ " 38%|███████████████-------------------------| 2790/7340 [98:31<160:40, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:04:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:04:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:04:51,279 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 449})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 585, 'y': 449})\n",
+ " 38%|███████████████-------------------------| 2790/7340 [98:33<160:43, 28.3 steps/min]\u001b[92m17:04:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:51,922 - agent.ComputerAgent - INFO - Computer: click({'x': 294, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 294, 'y': 77})\n",
+ " 38%|███████████████-------------------------| 2791/7340 [98:34<160:39, 28.3 steps/min]\u001b[92m17:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:53,617 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 112, 'y': 245})\n",
+ " 38%|███████████████-------------------------| 2792/7340 [98:35<160:35, 28.3 steps/min]\u001b[92m17:04:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:04:54,273 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 549})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 605, 'y': 549})\n",
+ " 38%|███████████████-------------------------| 2793/7340 [98:36<160:31, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:04:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:55,452 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 243})\n",
+ " 38%|███████████████-------------------------| 2794/7340 [98:37<160:27, 28.3 steps/min]2025-08-11 17:04:56,067 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:04:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2795/7340 [98:38<160:23, 28.3 steps/min]\u001b[92m17:04:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:57,268 - agent.ComputerAgent - INFO - Computer: click({'x': 509, 'y': 626})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 509, 'y': 626})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:04:57,899 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2795/7340 [98:39<160:26, 28.3 steps/min]\u001b[92m17:04:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:04:59,059 - agent.ComputerAgent - INFO - Computer: click({'x': 463, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 463, 'y': 136})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2796/7340 [98:40<160:22, 28.3 steps/min]2025-08-11 17:04:59,738 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:04:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:00,396 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:05:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:05:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2797/7340 [98:42<160:19, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:01,075 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 251})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 251})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:01,739 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:05:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:05:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2797/7340 [98:43<160:21, 28.3 steps/min]2025-08-11 17:05:02,441 - agent.ComputerAgent - INFO - Computer: click({'x': 847, 'y': 224})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 847, 'y': 224})\n",
+ "2025-08-11 17:05:03,080 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ " 38%|███████████████-------------------------| 2798/7340 [98:44<160:17, 28.3 steps/min]\u001b[92m17:05:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:05:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:03,789 - agent.ComputerAgent - INFO - Computer: click({'x': 295, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 295, 'y': 111})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2799/7340 [98:45<160:13, 28.3 steps/min]2025-08-11 17:05:04,951 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:05:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:05:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:05,620 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -586, 'scroll_x': 0, 'x': 526, 'y': 617})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -586, 'scroll_x': 0, 'x': 526, 'y': 617})\n",
+ " 38%|███████████████-------------------------| 2801/7340 [98:48<160:06, 28.3 steps/min]\u001b[92m17:05:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:07,323 - agent.ComputerAgent - INFO - Computer: click({'x': 418, 'y': 660})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 418, 'y': 660})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:07,966 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:05:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:09,317 - agent.ComputerAgent - INFO - Agent: I checked your Desktop directory, and it is empty. I also searched for common image types, and there are no photos available to edit. Without the photo, I can’t reduce its brightness.\n",
+ "\n",
+ "Please place the photo on your Desktop (or tell me its exact location and filename), and I will immediately tone down its brightness and save the updated version.\n",
+ "INFO:agent.ComputerAgent:Agent: I checked your Desktop directory, and it is empty. I also searched for common image types, and there are no photos available to edit. Without the photo, I can’t reduce its brightness.\n",
+ "\n",
+ "Please place the photo on your Desktop (or tell me its exact location and filename), and I will immediately tone down its brightness and save the updated version.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:09,970 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1807\n",
+ " - prompt_tokens: 13473\n",
+ " - total_tokens: 15280\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1728\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 12032\n",
+ " - response_cost: $0.0214\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1807\n",
+ " - prompt_tokens: 13473\n",
+ " - total_tokens: 15280\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1728\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 12032\n",
+ " - response_cost: $0.0214\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2802/7340 [98:51<160:06, 28.3 steps/min]\u001b[92m17:05:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:10,641 - agent.ComputerAgent - INFO - Computer: click({'x': 578, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 578, 'y': 430})\n",
+ "2025-08-11 17:05:11,327 - agent.ComputerAgent - INFO - Computer: click({'x': 896, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 896, 'y': 35})\n",
+ " 38%|███████████████-------------------------| 2803/7340 [98:53<160:03, 28.3 steps/min]2025-08-11 17:05:11,982 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:05:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:05:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:05:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2805/7340 [98:54<159:54, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:13,287 - agent.ComputerAgent - INFO - Computer: click({'x': 496, 'y': 587})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 496, 'y': 587})\n",
+ "2025-08-11 17:05:13,959 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:05:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:05:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:15,307 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 38%|███████████████-------------------------| 2805/7340 [98:57<159:58, 28.3 steps/min]2025-08-11 17:05:15,988 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 143})\n",
+ "2025-08-11 17:05:16,651 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:05:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2806/7340 [98:58<159:55, 28.4 steps/min]2025-08-11 17:05:17,685 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:05:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:05:18,341 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:05:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:19,685 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://discussions.flightaware.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://discussions.flightaware.com'})\n",
+ " 38%|███████████████-------------------------| 2807/7340 [99:01<159:54, 28.3 steps/min]2025-08-11 17:05:20,349 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:05:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2808/7340 [99:02<159:50, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:22,028 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:05:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2808/7340 [99:03<159:52, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2808/7340 [99:04<159:54, 28.3 steps/min]2025-08-11 17:05:23,737 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:05:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2808/7340 [99:05<159:56, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e5b48f23-7687-494c-b68c-ebdfc70d085f/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:26,375 - agent.ComputerAgent - INFO - Agent: I restored the Slides panel on the left in your LibreOffice Impress.\n",
+ "\n",
+ "If it disappears again:\n",
+ "- Go to the top menu: View > Slide Pane to toggle it back on.\n",
+ "- You can resize it by dragging its right edge. If you lose the right-side Properties panel, use View > Sidebar.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I restored the Slides panel on the left in your LibreOffice Impress.\n",
+ "\n",
+ "If it disappears again:\n",
+ "- Go to the top menu: View > Slide Pane to toggle it back on.\n",
+ "- You can resize it by dragging its right edge. If you lose the right-side Properties panel, use View > Sidebar.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 17:05:27,005 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 712\n",
+ " - prompt_tokens: 7355\n",
+ " - total_tokens: 8067\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0163\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 712\n",
+ " - prompt_tokens: 7355\n",
+ " - total_tokens: 8067\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0163\n",
+ " 38%|███████████████-------------------------| 2809/7340 [99:08<159:55, 28.3 steps/min]\u001b[92m17:05:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/025be48d-d757-4973-8c17-e42b8f6814b0/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:05:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:30,349 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 274})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 274})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2809/7340 [99:13<160:03, 28.3 steps/min]\u001b[92m17:05:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.74s/it]2025-08-11 17:05:32,458 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:05:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:33,959 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:05:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2810/7340 [99:15<160:01, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:35,308 - agent.ComputerAgent - INFO - Computer: type({'text': 'anonym-x2024@gmail.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'anonym-x2024@gmail.com'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]28.3 steps/min]2025-08-11 17:05:35,927 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:05:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.47s/it]28.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ " 38%|███████████████-------------------------| 2811/7340 [99:19<160:01, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:38,689 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:05:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2811/7340 [99:20<160:03, 28.3 steps/min]2025-08-11 17:05:39,319 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 38%|███████████████-------------------------| 2811/7340 [99:21<160:04, 28.3 steps/min]\u001b[92m17:05:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:05:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 38%|███████████████-------------------------| 2811/7340 [99:22<160:06, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:42,205 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m17:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2811/7340 [99:24<160:10, 28.3 steps/min]\u001b[92m17:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:43,512 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 261, 'y': 52})\n",
+ "\u001b[92m17:05:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:44,155 - agent.ComputerAgent - INFO - Computer: double_click({'x': 193, 'y': 117})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 193, 'y': 117})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:05:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2812/7340 [99:26<160:07, 28.3 steps/min]\u001b[92m17:05:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:05:46,137 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:47,448 - agent.ComputerAgent - INFO - Computer: type({'text': 'Year\\tCA changes\\tFA changes\\tOA changes'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Year\\tCA changes\\tFA changes\\tOA changes'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:48,117 - agent.ComputerAgent - INFO - Computer: click({'x': 291, 'y': 406})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 291, 'y': 406})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 38%|███████████████-------------------------| 2814/7340 [99:30<160:02, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:49,373 - agent.ComputerAgent - INFO - Computer: double_click({'x': 473, 'y': 458})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 473, 'y': 458})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:49,996 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:05:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:05:50,649 - agent.ComputerAgent - INFO - Computer: click({'x': 574, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 574, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:51,970 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m17:05:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2829/7340 [99:33<158:45, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:05:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:53,355 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 628, 'y': 367}, {'x': 554, 'y': 308}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 628, 'y': 367}, {'x': 554, 'y': 308}]})\n",
+ "2025-08-11 17:05:54,015 - agent.ComputerAgent - INFO - Computer: double_click({'x': 534, 'y': 754})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 534, 'y': 754})\n",
+ "\u001b[92m17:05:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 39%|███████████████-------------------------| 2831/7340 [99:36<158:38, 28.4 steps/min]\u001b[92m17:05:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:55,969 - agent.ComputerAgent - INFO - Computer: double_click({'x': 60, 'y': 141})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 60, 'y': 141})\n",
+ "\u001b[92m17:05:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:56,618 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 217})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 918, 'y': 217})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2833/7340 [99:38<158:30, 28.4 steps/min]\u001b[92m17:05:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:57,296 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:05:57,296 - agent.ComputerAgent - INFO - Computer: click({'x': 923, 'y': 332})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 923, 'y': 332})\n",
+ " 39%|███████████████-------------------------| 2835/7340 [99:39<158:21, 28.4 steps/min]\u001b[92m17:05:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:05:57,937 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 350})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 249, 'y': 350})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7cf040ac-2cba-40ae-8a67-0a2b3cfd2020/close \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2837/7340 [99:40<158:12, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:05:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:05:59,769 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 181})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2837/7340 [99:41<158:14, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2838/7340 [99:42<158:10, 28.5 steps/min]2025-08-11 17:06:01,457 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:06:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:43<157:39, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:06:02,648 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:06:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c3518cd0-0df6-44e9-8393-0c62002bc984/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:06:03,314 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:06:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:04,646 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:06:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:46<157:43, 28.5 steps/min]2025-08-11 17:06:05,326 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:06:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:06:05,975 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:06:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:47<157:45, 28.5 steps/min]2025-08-11 17:06:06,606 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:06:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:07,287 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:06:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:49<157:47, 28.5 steps/min]2025-08-11 17:06:07,906 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:06:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:08,557 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:06:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:09,226 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:06:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:51<157:51, 28.5 steps/min]2025-08-11 17:06:09,886 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:06:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:06:10,540 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:06:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:52<157:53, 28.5 steps/min]2025-08-11 17:06:11,188 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:06:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:11,821 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:06:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:53<157:55, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:55<157:57, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:57<158:00, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:58<158:02, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2844/7340 [99:59<158:04, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:06:19,050 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 39%|███████████████-------------------------| 2844/7340 [100:00<158:06, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\u001b[92m17:06:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2845/7340 [100:02<158:03, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2845/7340 [100:03<158:05, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it] 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ " 39%|███████████████-------------------------| 2845/7340 [100:06<158:09, 28.4 steps/min]\u001b[92m17:06:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2845/7340 [100:07<158:11, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:26,695 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:06:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2845/7340 [100:08<158:13, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:06:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:27,350 - agent.ComputerAgent - INFO - Computer: click({'x': 346, 'y': 495})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 346, 'y': 495})\n",
+ "\u001b[92m17:06:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:28,014 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 361})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 278, 'y': 361})\n",
+ " 39%|███████████████-------------------------| 2845/7340 [100:09<158:15, 28.4 steps/min]\u001b[92m17:06:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:28,678 - agent.ComputerAgent - INFO - Computer: click({'x': 708, 'y': 648})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 708, 'y': 648})\n",
+ "\u001b[92m17:06:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:29,341 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 66, 'y': 178})\n",
+ "\u001b[92m17:06:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2847/7340 [100:11<158:06, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:30,005 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 65})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 46, 'y': 65})\n",
+ " 39%|███████████████-------------------------| 2849/7340 [100:12<157:57, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:06:31,813 - agent.ComputerAgent - INFO - Agent: I restored your last closed tab (Tripadvisor). For future reference, you can press Ctrl+Shift+T to reopen the most recently closed tab.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I restored your last closed tab (Tripadvisor). For future reference, you can press Ctrl+Shift+T to reopen the most recently closed tab.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 17:06:32,488 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 361\n",
+ " - prompt_tokens: 4662\n",
+ " - total_tokens: 5023\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0094\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 361\n",
+ " - prompt_tokens: 4662\n",
+ " - total_tokens: 5023\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0094\n",
+ " 39%|███████████████-------------------------| 2851/7340 [100:14<157:49, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2851/7340 [100:15<157:51, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:06:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:34,920 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 271})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:36,659 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2851/7340 [100:19<157:57, 28.4 steps/min]2025-08-11 17:06:37,964 - agent.ComputerAgent - INFO - Computer: click({'x': 583, 'y': 582})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 583, 'y': 582})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:38,598 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:06:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:06:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:06:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2852/7340 [100:20<157:54, 28.4 steps/min]2025-08-11 17:06:39,883 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ "2025-08-11 17:06:40,508 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:06:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:41,198 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:06:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2869/7340 [100:22<156:26, 28.6 steps/min]\u001b[92m17:06:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:41,848 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:06:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:42,511 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 62})\n",
+ "2025-08-11 17:06:43,157 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:06:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2870/7340 [100:24<156:23, 28.6 steps/min]2025-08-11 17:06:44,177 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:06:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:06:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2871/7340 [100:27<156:21, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:06:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:06:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/687e10e4-fe9c-4767-a255-77d9b553a724/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 39%|███████████████-------------------------| 2871/7340 [100:28<156:23, 28.6 steps/min]\u001b[92m17:06:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:47,309 - agent.ComputerAgent - INFO - Computer: click({'x': 711, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 711, 'y': 176})\n",
+ "\u001b[92m17:06:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:47,989 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 650, 'y': 368}, {'x': 528, 'y': 275}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 650, 'y': 368}, {'x': 528, 'y': 275}]})\n",
+ " 39%|███████████████-------------------------| 2871/7340 [100:29<156:25, 28.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<156:16, 28.6 steps/min]2025-08-11 17:06:49,772 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:06:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:06:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it] 28.6 steps/min]2025-08-11 17:06:51,288 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:06:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:06:51,921 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:06:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2873/7340 [100:33<156:21, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]\u001b[92m17:06:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2873/7340 [100:35<156:23, 28.6 steps/min]2025-08-11 17:06:54,378 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:06:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it] 28.6 steps/min]\n",
+ "2025-08-11 17:06:55,440 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:06:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2873/7340 [100:37<156:27, 28.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 39%|███████████████-------------------------| 2873/7340 [100:38<156:28, 28.5 steps/min]\u001b[92m17:06:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:57,280 - agent.ComputerAgent - INFO - Computer: click({'x': 564, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 564, 'y': 77})\n",
+ "\u001b[92m17:06:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:06:57,987 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 572})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 572})\n",
+ " 39%|███████████████-------------------------| 2873/7340 [100:39<156:30, 28.5 steps/min]\u001b[92m17:06:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:06:58,637 - agent.ComputerAgent - INFO - Computer: click({'x': 231, 'y': 129})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 231, 'y': 129})\n",
+ " 39%|███████████████-------------------------| 2875/7340 [100:40<156:21, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:06:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 39%|███████████████-------------------------| 2876/7340 [100:41<156:17, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:01,145 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 199})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 306, 'y': 199})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2876/7340 [100:42<156:19, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:01,815 - agent.ComputerAgent - INFO - Computer: click({'x': 466, 'y': 394})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 466, 'y': 394})\n",
+ " 39%|███████████████-------------------------| 2878/7340 [100:44<156:11, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:07:03,970 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:07:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2878/7340 [100:45<156:13, 28.6 steps/min]2025-08-11 17:07:05,108 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:07:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:07:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2878/7340 [100:47<156:16, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:07:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:06,974 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 206})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 206})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:07:08,334 - agent.ComputerAgent - INFO - Computer: type({'text': '2015\\n2016\\n2017\\n2018\\n2019'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '2015\\n2016\\n2017\\n2018\\n2019'})\n",
+ " 39%|███████████████-------------------------| 2878/7340 [100:50<156:19, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:07:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2882/7340 [100:51<156:00, 28.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:10,279 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:07:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:07:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:10,943 - agent.ComputerAgent - INFO - Computer: click({'x': 482, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 482, 'y': 440})\n",
+ "\u001b[92m17:07:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f73836c4-d8e3-425b-a750-f2319c89164e/close \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2882/7340 [100:52<156:02, 28.6 steps/min]2025-08-11 17:07:11,583 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 251})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 251})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/reset \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2883/7340 [100:54<155:59, 28.6 steps/min]2025-08-11 17:07:12,912 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:07:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2884/7340 [100:55<155:55, 28.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2884/7340 [100:56<155:57, 28.6 steps/min]2025-08-11 17:07:15,299 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:07:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:07:15,939 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:07:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:07:17,282 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+left'})\n",
+ " 39%|███████████████-------------------------| 2884/7340 [100:59<156:01, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:07:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]2025-08-11 17:07:19,468 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:07:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:07:20,148 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 39%|███████████████-------------------------| 2884/7340 [101:01<156:06, 28.5 steps/min]\u001b[92m17:07:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]\u001b[92m17:07:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it] 28.5 steps/min]\n",
+ "2025-08-11 17:07:21,753 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:07:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:07:22,593 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:07:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2884/7340 [101:05<156:10, 28.5 steps/min]\u001b[92m17:07:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 39%|███████████████-------------------------| 2884/7340 [101:06<156:12, 28.5 steps/min]\u001b[92m17:07:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:07:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:07:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:25,484 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 392})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 392})\n",
+ " 39%|███████████████-------------------------| 2884/7340 [101:07<156:14, 28.5 steps/min]\u001b[92m17:07:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:26,161 - agent.ComputerAgent - INFO - Computer: click({'x': 600, 'y': 559})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 600, 'y': 559})\n",
+ "\u001b[92m17:07:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:26,823 - agent.ComputerAgent - INFO - Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:07:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2885/7340 [101:09<156:12, 28.5 steps/min]\u001b[92m17:07:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:28,108 - agent.ComputerAgent - INFO - Computer: click({'x': 298, 'y': 199})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 298, 'y': 199})\n",
+ "2025-08-11 17:07:28,744 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 623, 'y': 418}, {'x': 517, 'y': 392}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 623, 'y': 418}, {'x': 517, 'y': 392}]})\n",
+ " 39%|███████████████-------------------------| 2887/7340 [101:10<156:03, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:07:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:29,901 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -6707, 'scroll_x': 0, 'x': 989, 'y': 420})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -6707, 'scroll_x': 0, 'x': 989, 'y': 420})\n",
+ " 39%|███████████████-------------------------| 2890/7340 [101:14<155:53, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:34,249 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:07:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:07:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:35,599 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:07:35,600 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 39%|███████████████-------------------------| 2891/7340 [101:17<155:52, 28.5 steps/min]2025-08-11 17:07:36,252 - agent.ComputerAgent - INFO - Computer: click({'x': 124, 'y': 177})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 124, 'y': 177})\n",
+ "2025-08-11 17:07:36,883 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:07:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:07:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2892/7340 [101:19<155:51, 28.5 steps/min]\u001b[92m17:07:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:38,830 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:07:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:07:39,509 - agent.ComputerAgent - INFO - Computer: double_click({'x': 986, 'y': 574})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 986, 'y': 574})\n",
+ "\u001b[92m17:07:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2893/7340 [101:21<155:47, 28.5 steps/min]2025-08-11 17:07:40,127 - agent.ComputerAgent - INFO - Computer: click({'x': 59, 'y': 78})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 59, 'y': 78})\n",
+ "2025-08-11 17:07:40,785 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:07:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2894/7340 [101:22<155:44, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6fcb07bb-6857-4888-82a0-1fd0dbf2d722/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2895/7340 [101:23<155:41, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2895/7340 [101:25<155:42, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2895/7340 [101:26<155:44, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it] 28.5 steps/min]2025-08-11 17:07:45,961 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:07:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2895/7340 [101:28<155:47, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]\u001b[92m17:07:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it] 28.5 steps/min]\n",
+ "2025-08-11 17:07:48,311 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:07:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:07:49,000 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:07:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 39%|███████████████-------------------------| 2897/7340 [101:30<155:41, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:07:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:07:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 39%|███████████████-------------------------| 2897/7340 [101:31<155:42, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:50,887 - agent.ComputerAgent - INFO - Computer: double_click({'x': 483, 'y': 392})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 483, 'y': 392})\n",
+ "\u001b[92m17:07:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:07:51,533 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:07:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:07:52,219 - agent.ComputerAgent - INFO - Computer: click({'x': 623, 'y': 494})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 623, 'y': 494})\n",
+ "\u001b[92m17:07:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fed9747f-6005-4d29-b83e-afc7934c0ff5/close \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2897/7340 [101:33<155:46, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:07:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:07:52,854 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 281})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 281})\n",
+ "2025-08-11 17:07:54,183 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 334})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 39%|███████████████-------------------------| 2899/7340 [101:35<155:38, 28.5 steps/min]2025-08-11 17:07:54,839 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:07:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:07:56,184 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2901/7340 [101:37<155:30, 28.5 steps/min]2025-08-11 17:07:57,348 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:07:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2901/7340 [101:39<155:32, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:07:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2901/7340 [101:41<155:36, 28.5 steps/min]\u001b[92m17:07:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:00,382 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:08:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:01,240 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]\u001b[92m17:08:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2901/7340 [101:43<155:38, 28.5 steps/min]2025-08-11 17:08:01,928 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:08:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]2025-08-11 17:08:03,645 - agent.ComputerAgent - INFO - Computer: type({'text': '=(INDEX(Sheet1.$B:$B; MATCH(A2; Sheet1.$A:$A; 0)) - INDEX(Sheet1.$B:$B; MATCH(A2-1; Sheet1.$A:$A; 0))) / INDEX(Sheet1.$B:$B; MATCH(A2-1; Sheet1.$A:$A; 0))'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=(INDEX(Sheet1.$B:$B; MATCH(A2; Sheet1.$A:$A; 0)) - INDEX(Sheet1.$B:$B; MATCH(A2-1; Sheet1.$A:$A; 0))) / INDEX(Sheet1.$B:$B; MATCH(A2-1; Sheet1.$A:$A; 0))'})\n",
+ "2025-08-11 17:08:04,453 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:08:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it] 28.5 steps/min]\n",
+ "2025-08-11 17:08:05,167 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:08:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2902/7340 [101:48<155:41, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:08:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:08,035 - agent.ComputerAgent - INFO - Computer: click({'x': 887, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 887, 'y': 234})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:09,352 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+left'})\n",
+ " 40%|███████████████-------------------------| 2902/7340 [101:51<155:45, 28.5 steps/min]2025-08-11 17:08:10,012 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 92})\n",
+ "\u001b[92m17:08:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:08:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:10,691 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 715, 'y': 648})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 715, 'y': 648})\n",
+ "2025-08-11 17:08:11,318 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 442})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 318, 'y': 442})\n",
+ "2025-08-11 17:08:11,968 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:08:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2903/7340 [101:53<155:44, 28.5 steps/min]2025-08-11 17:08:12,670 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:08:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2906/7340 [101:55<155:31, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|███████████████-------------------------| 2906/7340 [101:56<155:33, 28.5 steps/min]\u001b[92m17:08:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:16,030 - agent.ComputerAgent - INFO - Computer: click({'x': 877, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 877, 'y': 124})\n",
+ " 40%|███████████████-------------------------| 2906/7340 [101:57<155:34, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:17,842 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:08:17,843 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2907/7340 [101:59<155:31, 28.5 steps/min]2025-08-11 17:08:18,520 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:08:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:19,845 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:08:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2907/7340 [102:01<155:35, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:20,523 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:08:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:08:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:21,185 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:22,544 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ " 40%|███████████████-------------------------| 2907/7340 [102:04<155:39, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:23,872 - agent.ComputerAgent - INFO - Computer: type({'text': 'anonym-x2024@gmail.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'anonym-x2024@gmail.com'})\n",
+ "\u001b[92m17:08:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2908/7340 [102:05<155:35, 28.5 steps/min]2025-08-11 17:08:24,499 - agent.ComputerAgent - INFO - Computer: click({'x': 467, 'y': 587})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 467, 'y': 587})\n",
+ "2025-08-11 17:08:25,131 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:08:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2909/7340 [102:06<155:32, 28.5 steps/min]2025-08-11 17:08:26,131 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:08:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2910/7340 [102:08<155:28, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:28,659 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2910/7340 [102:11<155:33, 28.5 steps/min]\u001b[92m17:08:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:30,362 - agent.ComputerAgent - INFO - Computer: click({'x': 623, 'y': 201})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 623, 'y': 201})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:30,987 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:08:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/091ec079-295e-4528-bad5-f34604d013c2/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:32,334 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2910/7340 [102:14<155:38, 28.5 steps/min]\u001b[92m17:08:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:08:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:32,992 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:08:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:08:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:34,290 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:08:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:08:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2911/7340 [102:16<155:36, 28.5 steps/min]2025-08-11 17:08:35,622 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 700}, {'x': 991, 'y': 420}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 989, 'y': 700}, {'x': 991, 'y': 420}]})\n",
+ "\u001b[92m17:08:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:36,308 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:08:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:08:36,982 - agent.ComputerAgent - INFO - Computer: click({'x': 862, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 862, 'y': 234})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:08:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2911/7340 [102:19<155:40, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:38,280 - agent.ComputerAgent - INFO - Computer: click({'x': 226, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 226, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:08:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:40,258 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 40%|███████████████-------------------------| 2913/7340 [102:21<155:34, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:40,892 - agent.ComputerAgent - INFO - Computer: double_click({'x': 735, 'y': 648})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 735, 'y': 648})\n",
+ "\u001b[92m17:08:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:41,547 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 444})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 278, 'y': 444})\n",
+ " 40%|███████████████-------------------------| 2915/7340 [102:23<155:25, 28.5 steps/min]2025-08-11 17:08:42,204 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:08:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:42,879 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:08:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2917/7340 [102:24<155:16, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:44,040 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:08:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2917/7340 [102:25<155:18, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:45,374 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2917/7340 [102:27<155:21, 28.5 steps/min]\u001b[92m17:08:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:08:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:47,380 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:08:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:49,331 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:08:49,332 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2917/7340 [102:31<155:26, 28.5 steps/min]2025-08-11 17:08:49,999 - agent.ComputerAgent - INFO - Computer: click({'x': 102, 'y': 238})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 102, 'y': 238})\n",
+ "\u001b[92m17:08:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:50,634 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 62})\n",
+ "\u001b[92m17:08:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:08:51,270 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:08:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2918/7340 [102:33<155:24, 28.5 steps/min]2025-08-11 17:08:51,946 - agent.ComputerAgent - INFO - Computer: click({'x': 735, 'y': 613})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 735, 'y': 613})\n",
+ "2025-08-11 17:08:52,589 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:08:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:08:53,232 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:08:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2920/7340 [102:35<155:16, 28.5 steps/min]2025-08-11 17:08:54,266 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:08:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:08:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2921/7340 [102:36<155:14, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:08:55,601 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:08:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:08:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:08:56,945 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 17:08:57,599 - agent.ComputerAgent - INFO - Computer: click({'x': 646, 'y': 278})\n",
+ " 40%|███████████████-------------------------| 2921/7340 [102:39<155:18, 28.5 steps/min]2025-08-11 17:08:58,241 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:08:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2923/7340 [102:41<155:10, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2923/7340 [102:42<155:12, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:02,771 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ " 40%|███████████████-------------------------| 2923/7340 [102:44<155:15, 28.4 steps/min]\u001b[92m17:09:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:09:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:03,449 - agent.ComputerAgent - INFO - Computer: click({'x': 986, 'y': 571})\n",
+ "\u001b[92m17:09:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:04,103 - agent.ComputerAgent - INFO - Computer: double_click({'x': 482, 'y': 392})\n",
+ " 40%|███████████████-------------------------| 2923/7340 [102:45<155:17, 28.4 steps/min]2025-08-11 17:09:04,711 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:09:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:05,370 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:09:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2925/7340 [102:47<155:09, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:06,706 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:09:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:07,361 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:09:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:09:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2925/7340 [102:49<155:11, 28.4 steps/min]2025-08-11 17:09:08,009 - agent.ComputerAgent - INFO - Computer: click({'x': 42, 'y': 93})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|███████████████-------------------------| 2925/7340 [102:51<155:14, 28.4 steps/min]\u001b[92m17:09:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:09,964 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:09:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:09:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:10,617 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 176})\n",
+ "\u001b[92m17:09:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2926/7340 [102:52<155:11, 28.4 steps/min]2025-08-11 17:09:11,301 - agent.ComputerAgent - INFO - Computer: click({'x': 736, 'y': 648})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2927/7340 [102:53<155:08, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:12,641 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:09:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:13,945 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 458})\n",
+ " 40%|███████████████-------------------------| 2928/7340 [102:55<155:05, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:09:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2929/7340 [102:56<155:02, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:15,764 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 700}, {'x': 991, 'y': 149}]})\n",
+ "\u001b[92m17:09:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:16,434 - agent.ComputerAgent - INFO - Computer: click({'x': 262, 'y': 298})\n",
+ " 40%|███████████████-------------------------| 2929/7340 [102:58<155:04, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:17,081 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:09:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|███████████████-------------------------| 2931/7340 [103:00<154:56, 28.5 steps/min]\u001b[92m17:09:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:19,022 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 15, 'y': 525})\n",
+ "\u001b[92m17:09:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:19,692 - agent.ComputerAgent - INFO - Computer: click({'x': 847, 'y': 404})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2931/7340 [103:01<154:58, 28.4 steps/min]2025-08-11 17:09:20,341 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:09:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:21,012 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:09:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|███████████████-------------------------| 2933/7340 [103:02<154:50, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:21,661 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:09:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2933/7340 [103:04<154:52, 28.5 steps/min]\u001b[92m17:09:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:09:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:24,272 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:09:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:24,961 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:09:24,962 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 19, 'y': 44})\n",
+ "\u001b[92m17:09:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|███████████████-------------------------| 2933/7340 [103:06<154:55, 28.4 steps/min]\u001b[92m17:09:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:25,649 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 298})\n",
+ "2025-08-11 17:09:26,316 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:09:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:26,988 - agent.ComputerAgent - INFO - Computer: click({'x': 102, 'y': 211})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|███████████████-------------------------| 2934/7340 [103:08<154:53, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:27,641 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:09:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:28,302 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:09:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:10<154:45, 28.5 steps/min]2025-08-11 17:09:28,933 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:09:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:12<154:48, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f0190121-650c-4779-b26d-2480f313dc84/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/reset \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:13<154:49, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:32,112 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:09:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:14<154:51, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:33,793 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:09:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:15<154:53, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:34,459 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:09:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:16<154:55, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:09:35,792 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:09:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:36,437 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 203})\n",
+ " 40%|████████████████------------------------| 2936/7340 [103:18<154:57, 28.4 steps/min]2025-08-11 17:09:37,084 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:09:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:37,743 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:09:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|████████████████------------------------| 2937/7340 [103:21<154:56, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2937/7340 [103:22<154:58, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:42,146 - agent.ComputerAgent - INFO - Computer: click({'x': 835, 'y': 638})\n",
+ " 40%|████████████████------------------------| 2937/7340 [103:23<155:00, 28.4 steps/min]2025-08-11 17:09:42,843 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:09:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:09:43,513 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:09:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|████████████████------------------------| 2938/7340 [103:26<154:58, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:46,509 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:48,504 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thunderbird'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Thunderbird'})\n",
+ " 40%|████████████████------------------------| 2938/7340 [103:30<155:04, 28.4 steps/min]\u001b[92m17:09:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:50,485 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:09:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:51,139 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 88})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 398, 'y': 88})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:53,063 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo $SHELL\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'echo $SHELL\\n'})\n",
+ " 40%|████████████████------------------------| 2939/7340 [103:34<155:06, 28.4 steps/min]2025-08-11 17:09:53,723 - agent.ComputerAgent - INFO - Computer: click({'x': 651, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 651, 'y': 237})\n",
+ "\u001b[92m17:09:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:55,679 - agent.ComputerAgent - INFO - Computer: click({'x': 727, 'y': 227})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 727, 'y': 227})\n",
+ "2025-08-11 17:09:56,312 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:09:56,313 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:09:58,301 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:09:58,302 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+tab'})\n",
+ " 40%|████████████████------------------------| 2941/7340 [103:40<155:03, 28.4 steps/min]2025-08-11 17:09:58,936 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:09:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:09:59,616 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:09:59,617 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 281, 'y': 108})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 281, 'y': 108})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:09:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:09:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:00,970 - agent.ComputerAgent - INFO - Agent: I’ve turned off Dark Mode in Google Chrome by setting the browser’s appearance to Light:\n",
+ "- Settings > Appearance > Mode = Light\n",
+ "\n",
+ "This forces Chrome to use a light theme regardless of your device theme.\n",
+ "\n",
+ "If any websites still appear dark, it could be due to:\n",
+ "- A dark-mode extension (e.g., Dark Reader). Disable it under Extensions.\n",
+ "- The experimental flag “Auto Dark Mode for Web Contents” being enabled. Visit chrome://flags and ensure it’s Disabled.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I’ve turned off Dark Mode in Google Chrome by setting the browser’s appearance to Light:\n",
+ "- Settings > Appearance > Mode = Light\n",
+ "\n",
+ "This forces Chrome to use a light theme regardless of your device theme.\n",
+ "\n",
+ "If any websites still appear dark, it could be due to:\n",
+ "- A dark-mode extension (e.g., Dark Reader). Disable it under Extensions.\n",
+ "- The experimental flag “Auto Dark Mode for Web Contents” being enabled. Visit chrome://flags and ensure it’s Disabled.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 17:10:01,595 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 876\n",
+ " - prompt_tokens: 8770\n",
+ " - total_tokens: 9646\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 768\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0197\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 876\n",
+ " - prompt_tokens: 8770\n",
+ " - total_tokens: 9646\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 768\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0197\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:02,937 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 400, 'y': 77})\n",
+ "2025-08-11 17:10:03,606 - agent.ComputerAgent - INFO - Computer: double_click({'x': 476, 'y': 392})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 476, 'y': 392})\n",
+ "\u001b[92m17:10:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:04,225 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ " 40%|████████████████------------------------| 2945/7340 [103:45<154:51, 28.4 steps/min]\u001b[92m17:10:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:04,897 - agent.ComputerAgent - INFO - Computer: click({'x': 986, 'y': 578})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 986, 'y': 578})\n",
+ "\u001b[92m17:10:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:10:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|████████████████------------------------| 2948/7340 [103:46<154:37, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:10:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:06,706 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 143, 'y': 188}, {'x': 147, 'y': 229}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 143, 'y': 188}, {'x': 147, 'y': 229}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2949/7340 [103:49<154:35, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:08,028 - agent.ComputerAgent - INFO - Computer: double_click({'x': 730, 'y': 648})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 730, 'y': 648})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|████████████████------------------------| 2950/7340 [103:50<154:31, 28.4 steps/min]\u001b[92m17:10:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:09,210 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 167})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 982, 'y': 167})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2960/7340 [103:51<153:40, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:10,343 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:10:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f141a0f-f4b0-4f99-b4c4-5217b268c96b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:11,005 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:10:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2961/7340 [103:54<153:39, 28.5 steps/min]2025-08-11 17:10:12,983 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:10:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<153:41, 28.5 steps/min]2025-08-11 17:10:14,314 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:10:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:14,954 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:10:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2961/7340 [103:56<153:43, 28.5 steps/min]2025-08-11 17:10:15,624 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:10:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]2025-08-11 17:10:16,302 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:10:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:16,965 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:10:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.66s/it]\u001b[92m17:10:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2961/7340 [103:59<153:47, 28.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 17:10:19,255 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it] 28.5 steps/min]\n",
+ "2025-08-11 17:10:19,891 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:20,643 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:10:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 40%|████████████████------------------------| 2961/7340 [104:02<153:52, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:10:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:10:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2961/7340 [104:03<153:54, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:22,777 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 387})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 387})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:23,455 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|████████████████------------------------| 2961/7340 [104:05<153:56, 28.4 steps/min]\u001b[92m17:10:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:24,794 - agent.ComputerAgent - INFO - Computer: click({'x': 116, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 116, 'y': 182})\n",
+ "2025-08-11 17:10:25,436 - agent.ComputerAgent - INFO - Computer: double_click({'x': 540, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 540, 'y': 131})\n",
+ "2025-08-11 17:10:26,087 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 104})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 659, 'y': 104})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:27,398 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~\\nwget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~\\nwget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\\n'})\n",
+ "\u001b[92m17:10:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:28,022 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 363})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 363})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:29,338 - agent.ComputerAgent - INFO - Computer: type({'text': 'spider-man toys'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'spider-man toys'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2962/7340 [104:11<154:00, 28.4 steps/min]\u001b[92m17:10:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:30,620 - agent.ComputerAgent - INFO - Computer: click({'x': 551, 'y': 740})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 551, 'y': 740})\n",
+ "2025-08-11 17:10:31,277 - agent.ComputerAgent - INFO - Computer: click({'x': 184, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 184, 'y': 164})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 40%|████████████████------------------------| 2968/7340 [104:13<153:31, 28.5 steps/min]2025-08-11 17:10:31,954 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:10:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:10:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:32,655 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 610})\n",
+ " 40%|████████████████------------------------| 2971/7340 [104:16<153:20, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:36,533 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://www.google.com/preferences?hl=en'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2971/7340 [104:18<153:23, 28.5 steps/min]2025-08-11 17:10:37,155 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:10:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:37,775 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:10:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:38,431 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:10:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ " 40%|████████████████------------------------| 2972/7340 [104:21<153:22, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:10:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:10:41,102 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:10:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:41,752 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:10:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:10:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:10:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2972/7340 [104:23<153:25, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:42,443 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:10:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:43,091 - agent.ComputerAgent - INFO - Computer: double_click({'x': 745, 'y': 646})\n",
+ "2025-08-11 17:10:44,133 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 177})\n",
+ "\u001b[92m17:10:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 40%|████████████████------------------------| 2972/7340 [104:25<153:29, 28.5 steps/min]2025-08-11 17:10:44,804 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:10:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:45,480 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 234})\n",
+ "2025-08-11 17:10:46,127 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:47,450 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://discussions.flightaware.com'})\n",
+ " 41%|████████████████------------------------| 2974/7340 [104:29<153:23, 28.5 steps/min]2025-08-11 17:10:48,102 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:10:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:48,756 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:10:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 2976/7340 [104:30<153:15, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:50,112 - agent.ComputerAgent - INFO - Computer: type({'text': 'bash miniconda.sh -b -p $HOME/miniconda\\n$HOME/miniconda/bin/conda init bash\\nsource ~/.bashrc\\nconda --version\\n'})\n",
+ " 41%|████████████████------------------------| 2977/7340 [104:32<153:13, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:51,773 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:10:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:53,133 - agent.ComputerAgent - INFO - Agent: Setup complete.\n",
+ "\n",
+ "- Created a Thunderbird Message Filter for account anonym-x2024@outlook.com.\n",
+ "- Filter name: “Forward to anonym-x2024@gmail.com”.\n",
+ "- Trigger: Getting New Mail (and Manual Run).\n",
+ "- Condition: Match all messages.\n",
+ "- Action: Forward Message to anonym-x2024@gmail.com.\n",
+ "- It’s configured “Before Junk Classification,” so every incoming email (including junk) will be forwarded.\n",
+ "- No changes were made to the online/remote account—this is local to Thunderbird. Note that Thunderbird must be running and connected for automatic forwarding to occur.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 17:10:53,794 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 834\n",
+ " - prompt_tokens: 14908\n",
+ " - total_tokens: 15742\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 6656\n",
+ " - response_cost: $0.0195\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:55,112 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 2978/7340 [104:36<153:13, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:10:55,778 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:10:56,454 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:10:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 2979/7340 [104:38<153:11, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:10:58,460 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:10:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:10:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:10:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 2979/7340 [104:40<153:14, 28.5 steps/min]\n",
+ "2025-08-11 17:10:59,793 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:10:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:11:00,470 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 89})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:02,417 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 41%|████████████████------------------------| 2979/7340 [104:44<153:19, 28.4 steps/min]\u001b[92m17:11:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:11:04,460 - agent.ComputerAgent - INFO - Computer: type({'text': 'Profit'})\n",
+ "2025-08-11 17:11:05,123 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 178})\n",
+ "\u001b[92m17:11:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:06,546 - agent.ComputerAgent - INFO - Agent: I proceeded to access your Google Drive to get the new invoice, but I’m blocked by Google’s sign‑in and 2‑Step Verification for the account on this machine. “Get help” leads to “Couldn’t sign you in. Contact your domain admin,” so I can’t reach the file. I also checked local folders (Desktop, Documents, Downloads) and there’s no invoice saved locally.\n",
+ "\n",
+ "This prevents completion because I cannot open the invoice to extract the table.\n",
+ "\n",
+ "Provide one of the following so I can finish the task end‑to‑end immediately:\n",
+ "- A shareable “Anyone with the link can view” URL to the invoice in Drive, or\n",
+ "- Upload the invoice file directly here, or\n",
+ "- A valid 2FA code (or backup code) for the Google account, or\n",
+ "- A public link from another storage (Dropbox/OneDrive/etc.).\n",
+ "\n",
+ "Once I have access, I will extract the table contents and export them into a LibreOffice Calc .xlsx on your Desktop.\n",
+ "2025-08-11 17:11:07,265 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 917\n",
+ " - prompt_tokens: 12081\n",
+ " - total_tokens: 12998\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4864\n",
+ " - response_cost: $0.0188\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 2982/7340 [104:49<153:10, 28.4 steps/min]2025-08-11 17:11:08,015 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:11:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:11:08,812 - agent.ComputerAgent - INFO - Computer: click({'x': 337, 'y': 364})\n",
+ "\u001b[92m17:11:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:10,125 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 17:11:10,773 - agent.ComputerAgent - INFO - Computer: click({'x': 765, 'y': 611})\n",
+ " 41%|████████████████------------------------| 2984/7340 [104:52<153:05, 28.5 steps/min]2025-08-11 17:11:11,459 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:11:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 2987/7340 [104:53<152:51, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:12,615 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:11:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 2987/7340 [104:54<152:53, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 2987/7340 [104:55<152:54, 28.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:11:14,962 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 986, 'y': 578})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 2987/7340 [104:57<152:57, 28.5 steps/min]\u001b[92m17:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:16,211 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:16,890 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:11:18,032 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 232})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:18,662 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:11:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:20,649 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 2988/7340 [105:02<152:59, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:11:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:11:21,989 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:11:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:11:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:22,670 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:11:22,670 - agent.ComputerAgent - INFO - Computer: click({'x': 148, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 148, 'y': 182})\n",
+ " 41%|████████████████------------------------| 2990/7340 [105:04<152:51, 28.5 steps/min]2025-08-11 17:11:23,329 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:11:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:11:24,019 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:11:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:11:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:25,305 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "2025-08-11 17:11:25,955 - agent.ComputerAgent - INFO - Computer: click({'x': 823, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 823, 'y': 232})\n",
+ " 41%|████████████████------------------------| 2991/7340 [105:07<152:51, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:27,235 - agent.ComputerAgent - INFO - Computer: type({'text': '=(INDEX(Sheet1.$C:$C; MATCH(A2; Sheet1.$A:$A; 0)) - INDEX(Sheet1.$C:$C; MATCH(A2-1; Sheet1.$A:$A; 0))) / INDEX(Sheet1.$C:$C; MATCH(A2-1; Sheet1.$A:$A; 0))'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=(INDEX(Sheet1.$C:$C; MATCH(A2; Sheet1.$A:$A; 0)) - INDEX(Sheet1.$C:$C; MATCH(A2-1; Sheet1.$A:$A; 0))) / INDEX(Sheet1.$C:$C; MATCH(A2-1; Sheet1.$A:$A; 0))'})\n",
+ " 41%|████████████████------------------------| 2993/7340 [105:09<152:43, 28.5 steps/min]2025-08-11 17:11:27,879 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:11:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:29,586 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:30,896 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/797f1798-0199-4d66-a503-1c5a8d488911/close \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 2994/7340 [105:12<152:43, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 2995/7340 [105:13<152:39, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:33,340 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:11:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<152:41, 28.5 steps/min]2025-08-11 17:11:34,028 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:11:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it] 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:37,330 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:11:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:11:38,237 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:11:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 17:11:38,885 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:11:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 2995/7340 [105:21<152:50, 28.4 steps/min]\u001b[92m17:11:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:41,408 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda create -n hf python=3.11 -y\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'conda create -n hf python=3.11 -y\\n'})\n",
+ "2025-08-11 17:11:42,030 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:11:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 2995/7340 [105:24<152:55, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:11:43,350 - agent.ComputerAgent - INFO - Computer: click({'x': 184, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 184, 'y': 178})\n",
+ "2025-08-11 17:11:43,971 - agent.ComputerAgent - INFO - Computer: click({'x': 745, 'y': 308})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 745, 'y': 308})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:45,294 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+tab'})\n",
+ "\u001b[92m17:11:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 2996/7340 [105:27<152:53, 28.4 steps/min]2025-08-11 17:11:45,963 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 163})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 163})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:11:47,232 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:11:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3004/7340 [105:28<152:15, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:47,899 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:11:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:11:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:49,879 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 41%|████████████████------------------------| 3005/7340 [105:31<152:13, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:11:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:11:51,227 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 432})\n",
+ "\u001b[92m17:11:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3009/7340 [105:32<151:55, 28.5 steps/min]2025-08-11 17:11:51,863 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 125, 'y': 183}, {'x': 125, 'y': 287}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 125, 'y': 183}, {'x': 125, 'y': 287}]})\n",
+ "\u001b[92m17:11:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:11:52,505 - agent.ComputerAgent - INFO - Computer: click({'x': 886, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 886, 'y': 234})\n",
+ " 41%|████████████████------------------------| 3010/7340 [105:34<151:52, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:11:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88706cb5-896e-4bf5-8b52-5df252945e00/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d2c8050e-87aa-487a-9555-884be2298ade/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 41%|████████████████------------------------| 3012/7340 [105:36<151:44, 28.5 steps/min]\u001b[92m17:11:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:11:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:11:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:11:57,195 - agent.ComputerAgent - INFO - Computer: click({'x': 335, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 335, 'y': 64})\n",
+ "\u001b[92m17:11:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<151:48, 28.5 steps/min]2025-08-11 17:11:58,064 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 53})\n",
+ "2025-08-11 17:11:58,723 - agent.ComputerAgent - INFO - Computer: click({'x': 853, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 853, 'y': 234})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3013/7340 [105:40<151:45, 28.5 steps/min]2025-08-11 17:11:59,598 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\u001b[92m17:11:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:00,284 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:12:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:01,185 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:12:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:01,870 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:12:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:02,761 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:12:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 17:12:04,215 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3015/7340 [105:45<151:43, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:05,577 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+tab'})\n",
+ "2025-08-11 17:12:06,202 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:12:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:12:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3016/7340 [105:48<151:41, 28.5 steps/min]2025-08-11 17:12:06,870 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:12:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:07,557 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:12:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:08,242 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 215})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 263, 'y': 215})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:09,573 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3016/7340 [105:51<151:45, 28.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:11,285 - agent.ComputerAgent - INFO - Computer: type({'text': '=B2-C2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=B2-C2'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6b818afb-eeae-43f9-b1ff-7f82844997e2/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3018/7340 [105:53<151:37, 28.5 steps/min]2025-08-11 17:12:12,593 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:12:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3019/7340 [105:54<151:34, 28.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3019/7340 [105:55<151:36, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:14,811 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:12:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3019/7340 [105:56<151:37, 28.5 steps/min]2025-08-11 17:12:15,476 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:12:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:16,171 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:12:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3019/7340 [105:57<151:39, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:17,470 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:12:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:12:18,807 - agent.ComputerAgent - INFO - Computer: type({'text': 'Blog'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Blog'})\n",
+ " 41%|████████████████------------------------| 3019/7340 [106:00<151:43, 28.5 steps/min]2025-08-11 17:12:19,453 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:12:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]2025-08-11 17:12:20,143 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:12:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3020/7340 [106:02<151:40, 28.5 steps/min]2025-08-11 17:12:21,458 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.68s/it]\n",
+ "\u001b[92m17:12:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:23,058 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.64s/it]\n",
+ " 41%|████████████████------------------------| 3020/7340 [106:04<151:44, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 17:12:24,472 - agent.ComputerAgent - INFO - Computer: type({'text': '=DATEDIF(DATEVALUE(REGEX(A2;\".*(\\\\d{1,2}/\\\\d{1,2}/\\\\d{4})\";\"$1\"));TODAY();\"y\")'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:25,172 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ " 41%|████████████████------------------------| 3020/7340 [106:06<151:47, 28.5 steps/min]\u001b[92m17:12:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:12:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:12:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:12:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3021/7340 [106:08<151:44, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:27,217 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 59})\n",
+ "\u001b[92m17:12:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:27,880 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 226, 'y': 194}, {'x': 342, 'y': 301}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:12:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3021/7340 [106:10<151:47, 28.5 steps/min]\n",
+ "2025-08-11 17:12:29,239 - agent.ComputerAgent - INFO - Computer: click({'x': 889, 'y': 449})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:30,558 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3023/7340 [106:13<151:41, 28.5 steps/min]\u001b[92m17:12:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:32,577 - agent.ComputerAgent - INFO - Computer: move({'x': 940, 'y': 232})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:33,937 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://www.target.com'})\n",
+ "\u001b[92m17:12:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3025/7340 [106:15<151:34, 28.5 steps/min]\n",
+ "\u001b[92m17:12:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:12:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:34,587 - agent.ComputerAgent - INFO - Computer: click({'x': 239, 'y': 143})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:12:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:35,870 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:12:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3027/7340 [106:17<151:27, 28.5 steps/min]\n",
+ "\u001b[92m17:12:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:36,518 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 125, 'y': 183}, {'x': 269, 'y': 599}]})\n",
+ "\u001b[92m17:12:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:37,191 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 643, 'scroll_x': 0, 'x': 537, 'y': 124})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3028/7340 [106:19<151:24, 28.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:12:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:12:38,985 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 534})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3030/7340 [106:20<151:16, 28.5 steps/min]2025-08-11 17:12:39,652 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:12:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:40,984 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda config --remove-key channels\\nconda config --add channels conda-forge\\nconda config --set channel_priority strict\\nconda create -n hf python=3.11 -y\\n'})\n",
+ " 41%|████████████████------------------------| 3031/7340 [106:22<151:13, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:42,161 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:12:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3032/7340 [106:23<151:10, 28.5 steps/min]2025-08-11 17:12:42,812 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:12:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:44,150 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3032/7340 [106:25<151:13, 28.5 steps/min]2025-08-11 17:12:44,812 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:12:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:45,472 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:12:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:46,141 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:12:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:46,821 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:12:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3032/7340 [106:28<151:17, 28.5 steps/min]2025-08-11 17:12:48,987 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:12:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3032/7340 [106:32<151:22, 28.5 steps/min]\u001b[92m17:12:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:50,998 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:12:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:12:51,677 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:12:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:52,992 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m17:12:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:12:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3032/7340 [106:34<151:25, 28.4 steps/min]\n",
+ "2025-08-11 17:12:53,682 - agent.ComputerAgent - INFO - Computer: click({'x': 371, 'y': 229})\n",
+ "2025-08-11 17:12:54,327 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:12:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:12:55,014 - agent.ComputerAgent - INFO - Computer: click({'x': 805, 'y': 371})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3033/7340 [106:36<151:23, 28.4 steps/min]2025-08-11 17:12:55,996 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:12:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:57,341 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 41%|████████████████------------------------| 3035/7340 [106:39<151:16, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:12:59,213 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda install -n base -y conda-libmamba-solver\\nconda config --set solver libmamba\\nconda create -n hf python=3.11 -y\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3036/7340 [106:40<151:14, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:12:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 41%|████████████████------------------------| 3037/7340 [106:41<151:10, 28.5 steps/min]\u001b[92m17:13:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:01,059 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 646, 'scroll_x': 0, 'x': 512, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/reset \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3037/7340 [106:42<151:12, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3038/7340 [106:43<151:08, 28.5 steps/min]2025-08-11 17:13:02,929 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:13:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:13:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:03,604 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 175})\n",
+ "2025-08-11 17:13:04,249 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:13:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3038/7340 [106:46<151:12, 28.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:13:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:06,178 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:13:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:13:06,843 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:13:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:13:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:08,207 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3039/7340 [106:50<151:12, 28.4 steps/min]\u001b[92m17:13:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:09,548 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 427})\n",
+ "\u001b[92m17:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:10,178 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:13:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:13:10,839 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 176})\n",
+ "\u001b[92m17:13:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3040/7340 [106:52<151:10, 28.4 steps/min]2025-08-11 17:13:11,514 - agent.ComputerAgent - INFO - Computer: click({'x': 915, 'y': 202})\n",
+ "2025-08-11 17:13:12,149 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:13:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3042/7340 [106:53<151:02, 28.5 steps/min]2025-08-11 17:13:12,839 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:13:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3043/7340 [106:54<150:58, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dbf6ccac-ccc2-452b-8e44-9445465a9eaa/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:13:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3043/7340 [106:57<151:02, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:18,119 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:13:18,120 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]2025-08-11 17:13:20,315 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 41%|████████████████------------------------| 3043/7340 [107:02<151:08, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:21,211 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:13:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 17:13:21,899 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:13:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:13:22,646 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:13:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3044/7340 [107:04<151:06, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:23,335 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:13:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:24,021 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:13:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 41%|████████████████------------------------| 3044/7340 [107:05<151:08, 28.4 steps/min]\u001b[92m17:13:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:24,683 - agent.ComputerAgent - INFO - Computer: click({'x': 549, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 549, 'y': 275})\n",
+ "\u001b[92m17:13:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:25,315 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 651, 'scroll_x': 0, 'x': 540, 'y': 333})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 651, 'scroll_x': 0, 'x': 540, 'y': 333})\n",
+ "\u001b[92m17:13:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 41%|████████████████------------------------| 3044/7340 [107:07<151:10, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:25,965 - agent.ComputerAgent - INFO - Computer: click({'x': 131, 'y': 177})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 131, 'y': 177})\n",
+ " 41%|████████████████------------------------| 3046/7340 [107:08<151:01, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:27,793 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda create -n hf python=3.11 -c conda-forge -y\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'conda create -n hf python=3.11 -c conda-forge -y\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3047/7340 [107:10<150:59, 28.4 steps/min]\u001b[92m17:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:29,575 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 112, 'y': 176})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3048/7340 [107:11<150:56, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ecc77db9-1735-4a1a-ab51-39c4517489fb/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3050/7340 [107:12<150:47, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:13:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:32,040 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 143})\n",
+ "2025-08-11 17:13:32,708 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:13:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3050/7340 [107:14<150:50, 28.4 steps/min]2025-08-11 17:13:33,350 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:34,028 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:13:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3051/7340 [107:15<150:47, 28.4 steps/min]2025-08-11 17:13:34,702 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:13:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<150:49, 28.4 steps/min]2025-08-11 17:13:36,046 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:13:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:13:36,682 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:13:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3051/7340 [107:18<150:50, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3051/7340 [107:21<150:54, 28.4 steps/min]\u001b[92m17:13:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]2025-08-11 17:13:40,393 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:13:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:13:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:43,265 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thinking about High-Quality Human Data.pdf'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Thinking about High-Quality Human Data.pdf'})\n",
+ " 42%|████████████████------------------------| 3051/7340 [107:24<151:00, 28.4 steps/min]\u001b[92m17:13:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:43,938 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 600, 'x': 469, 'y': 269})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 600, 'x': 469, 'y': 269})\n",
+ "\u001b[92m17:13:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:44,577 - agent.ComputerAgent - INFO - Computer: click({'x': 621, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 621, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:13:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:45,925 - agent.ComputerAgent - INFO - Agent: I configured LibreOffice Impress to use only one monitor for the slideshow and to disable the Presenter Console.\n",
+ "\n",
+ "What I changed:\n",
+ "- Tools > Options > LibreOffice Impress > General: unchecked “Enable Presenter Console,” then applied and saved.\n",
+ "\n",
+ "How to choose which single monitor shows the slideshow (do this once per machine/setup):\n",
+ "- Slide Show > Slide Show Settings…\n",
+ "- Under Multiple displays, set “Presentation display” to the monitor you want (Display 1 or Display 2) and keep Presentation mode as “Full screen.”\n",
+ "- Click OK.\n",
+ "\n",
+ "Result:\n",
+ "- When you start the slideshow, it will fill only the selected monitor.\n",
+ "- The second monitor will remain free for other work, and the “Current/Next slide” presenter view will no longer appear.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I configured LibreOffice Impress to use only one monitor for the slideshow and to disable the Presenter Console.\n",
+ "\n",
+ "What I changed:\n",
+ "- Tools > Options > LibreOffice Impress > General: unchecked “Enable Presenter Console,” then applied and saved.\n",
+ "\n",
+ "How to choose which single monitor shows the slideshow (do this once per machine/setup):\n",
+ "- Slide Show > Slide Show Settings…\n",
+ "- Under Multiple displays, set “Presentation display” to the monitor you want (Display 1 or Display 2) and keep Presentation mode as “Full screen.”\n",
+ "- Click OK.\n",
+ "\n",
+ "Result:\n",
+ "- When you start the slideshow, it will fill only the selected monitor.\n",
+ "- The second monitor will remain free for other work, and the “Current/Next slide” presenter view will no longer appear.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 17:13:46,540 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1315\n",
+ " - prompt_tokens: 13574\n",
+ " - total_tokens: 14889\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 9472\n",
+ " - response_cost: $0.0195\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1315\n",
+ " - prompt_tokens: 13574\n",
+ " - total_tokens: 14889\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 9472\n",
+ " - response_cost: $0.0195\n",
+ " 42%|████████████████------------------------| 3054/7340 [107:28<150:49, 28.4 steps/min]2025-08-11 17:13:47,229 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 218})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 969, 'y': 218})\n",
+ "2025-08-11 17:13:47,886 - agent.ComputerAgent - INFO - Computer: double_click({'x': 205, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 205, 'y': 214})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:13:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:50,419 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3055/7340 [107:32<150:50, 28.4 steps/min]\u001b[92m17:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:13:51,700 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:13:52,387 - agent.ComputerAgent - INFO - Computer: click({'x': 954, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 954, 'y': 232})\n",
+ "2025-08-11 17:13:53,035 - agent.ComputerAgent - INFO - Computer: click({'x': 589, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 589, 'y': 143})\n",
+ "\u001b[92m17:13:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3057/7340 [107:34<150:43, 28.4 steps/min]2025-08-11 17:13:53,673 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 658, 'y': 467})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 658, 'y': 467})\n",
+ " 42%|████████████████------------------------| 3059/7340 [107:35<150:34, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:13:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3060/7340 [107:37<150:31, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:13:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:13:56,551 - agent.ComputerAgent - INFO - Computer: click({'x': 660, 'y': 104})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 660, 'y': 104})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3060/7340 [107:38<150:33, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1f48e361-2592-41ee-8818-d6e9174fe800/close \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3063/7340 [107:39<150:19, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:13:58,853 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:13:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3063/7340 [107:40<150:21, 28.4 steps/min]2025-08-11 17:13:59,510 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:13:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:00,189 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:14:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:00,831 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:14:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3063/7340 [107:42<150:23, 28.4 steps/min]2025-08-11 17:14:01,477 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:14:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:02,169 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:14:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3063/7340 [107:43<150:25, 28.4 steps/min]2025-08-11 17:14:02,841 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:14:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:03,496 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:14:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3063/7340 [107:45<150:27, 28.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:14:06,224 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda create -n hf python=3.11 --override-channels -c conda-forge -y\\n'})\n",
+ " 42%|████████████████------------------------| 3064/7340 [107:53<150:33, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 17:14:12,505 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:14:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3064/7340 [107:55<150:36, 28.4 steps/min]\u001b[92m17:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:14:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:14:15,999 - agent.ComputerAgent - INFO - Computer: type({'text': 'spider-man toys kids'})\n",
+ " 42%|████████████████------------------------| 3064/7340 [107:57<150:40, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:14:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:14:17,286 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 512, 'y': 384})\n",
+ "\u001b[92m17:14:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:14:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:14:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:14:17,916 - agent.ComputerAgent - INFO - Computer: click({'x': 175, 'y': 183})\n",
+ " 42%|████████████████------------------------| 3065/7340 [107:59<150:37, 28.4 steps/min]2025-08-11 17:14:18,581 - agent.ComputerAgent - INFO - Computer: click({'x': 730, 'y': 275})\n",
+ "2025-08-11 17:14:19,257 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 182})\n",
+ "\u001b[92m17:14:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:14:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:14:19,937 - agent.ComputerAgent - INFO - Computer: click({'x': 184, 'y': 178})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3067/7340 [108:02<150:31, 28.4 steps/min]2025-08-11 17:14:21,216 - agent.ComputerAgent - INFO - Computer: double_click({'x': 757, 'y': 644})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3070/7340 [108:04<150:19, 28.4 steps/min]\u001b[92m17:14:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:14:23,849 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 59})\n",
+ "\u001b[92m17:14:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:14:25,159 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 17:14:25,792 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 254})\n",
+ "\u001b[92m17:14:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/reset \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3071/7340 [108:07<150:18, 28.4 steps/min]2025-08-11 17:14:26,480 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 178})\n",
+ "2025-08-11 17:14:27,163 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 654, 'scroll_x': 0, 'x': 654, 'y': 467})\n",
+ " 42%|████████████████------------------------| 3076/7340 [108:09<149:56, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:11<149:49, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b6fc8c3-534a-4e7d-9a9b-4c6bad0e0619/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:13<149:50, 28.4 steps/min]2025-08-11 17:14:32,090 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:14:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:14:32,771 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:14:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:14<149:52, 28.4 steps/min]2025-08-11 17:14:33,404 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:14:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:34,070 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:14:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:34,696 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:14:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:35,372 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:14:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:36,032 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:14:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:17<149:57, 28.4 steps/min]2025-08-11 17:14:37,022 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:14:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:37,680 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:14:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:19<149:59, 28.4 steps/min]2025-08-11 17:14:38,337 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:14:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:39,001 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:14:39,701 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:21<150:02, 28.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]2025-08-11 17:14:42,801 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:14:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:24<150:06, 28.4 steps/min]\n",
+ "2025-08-11 17:14:43,842 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:14:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:25<150:08, 28.4 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:14:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:28<150:12, 28.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 42%|████████████████------------------------| 3078/7340 [108:29<150:14, 28.4 steps/min]\u001b[92m17:14:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:14:48,883 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:14:48,885 - agent.ComputerAgent - INFO - Computer: click({'x': 314, 'y': 121})\n",
+ "\u001b[92m17:14:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:14:49,521 - agent.ComputerAgent - INFO - Computer: double_click({'x': 193, 'y': 178})\n",
+ " 42%|████████████████------------------------| 3080/7340 [108:32<150:07, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3080/7340 [108:33<150:08, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3080/7340 [108:34<150:09, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:14:53,931 - agent.ComputerAgent - INFO - Computer: type({'text': 'Paper Recommendation'})\n",
+ " 42%|████████████████------------------------| 3080/7340 [108:35<150:11, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:14:55,591 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:14:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3081/7340 [108:37<150:09, 28.4 steps/min]2025-08-11 17:14:56,259 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:14:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:14:57,587 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3081/7340 [108:39<150:12, 28.4 steps/min]\u001b[92m17:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:14:59,416 - agent.ComputerAgent - INFO - Computer: click({'x': 107, 'y': 33})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:15:00,701 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3082/7340 [108:43<150:12, 28.3 steps/min]\u001b[92m17:15:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:15:02,638 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 42%|████████████████------------------------| 3084/7340 [108:45<150:04, 28.4 steps/min]\u001b[92m17:15:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:15:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:04,289 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:15:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:04,939 - agent.ComputerAgent - INFO - Computer: click({'x': 572, 'y': 551})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3084/7340 [108:47<150:07, 28.3 steps/min]\u001b[92m17:15:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:15:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:06,301 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:15:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:15:07,327 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:15:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:15:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:08,631 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:15:08,632 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3085/7340 [108:51<150:07, 28.3 steps/min]2025-08-11 17:15:09,958 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 207})\n",
+ "\u001b[92m17:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:11,270 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 147, 'y': 581}, {'x': 147, 'y': 678}]})\n",
+ " 42%|████████████████------------------------| 3086/7340 [108:52<150:05, 28.3 steps/min]\u001b[92m17:15:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:11,906 - agent.ComputerAgent - INFO - Computer: click({'x': 880, 'y': 203})\n",
+ "\u001b[92m17:15:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:12,582 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 243})\n",
+ " 42%|████████████████------------------------| 3090/7340 [108:55<149:48, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3090/7340 [108:56<149:50, 28.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:15,903 - agent.ComputerAgent - INFO - Computer: click({'x': 894, 'y': 232})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3090/7340 [108:57<149:51, 28.4 steps/min]2025-08-11 17:15:16,561 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:15:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:15:17,873 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda activate hf\\nconda install -y -c conda-forge datasets\\npython -c \"import datasets, sys; print(\\'datasets version:\\', datasets.__version__)\"\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:19,817 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+z'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3091/7340 [109:01<149:52, 28.4 steps/min]2025-08-11 17:15:20,461 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:15:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:21,839 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:22,509 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ " 42%|████████████████------------------------| 3092/7340 [109:04<149:50, 28.3 steps/min]\u001b[92m17:15:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:15:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:23,191 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:15:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:23,827 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:15:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:15:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3093/7340 [109:05<149:47, 28.4 steps/min]2025-08-11 17:15:24,498 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 194, 'y': 182}, {'x': 183, 'y': 294}]})\n",
+ "2025-08-11 17:15:25,826 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:15:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3093/7340 [109:08<149:51, 28.3 steps/min]\u001b[92m17:15:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:27,861 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "\u001b[92m17:15:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3094/7340 [109:09<149:48, 28.3 steps/min]2025-08-11 17:15:28,558 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 175})\n",
+ "2025-08-11 17:15:29,222 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3095/7340 [109:11<149:45, 28.3 steps/min]2025-08-11 17:15:29,891 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:15:30,530 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:15:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3096/7340 [109:12<149:41, 28.4 steps/min]2025-08-11 17:15:31,171 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:15:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3096/7340 [109:13<149:43, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3096/7340 [109:14<149:44, 28.3 steps/min]2025-08-11 17:15:32,801 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:15:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:15:33,431 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:15:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3096/7340 [109:15<149:45, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3096/7340 [109:16<149:47, 28.3 steps/min]\u001b[92m17:15:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:35,248 - agent.ComputerAgent - INFO - Computer: click({'x': 804, 'y': 654})\n",
+ "2025-08-11 17:15:35,931 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:15:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3096/7340 [109:17<149:49, 28.3 steps/min]\n",
+ "2025-08-11 17:15:37,102 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:15:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3097/7340 [109:18<149:45, 28.3 steps/min]2025-08-11 17:15:37,759 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:15:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3097/7340 [109:22<149:51, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:15:42,613 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ " 42%|████████████████------------------------| 3097/7340 [109:24<149:53, 28.3 steps/min]2025-08-11 17:15:43,243 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:15:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:44,564 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:15:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3097/7340 [109:26<149:56, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:45,740 - agent.ComputerAgent - INFO - Computer: click({'x': 408, 'y': 279})\n",
+ " 42%|████████████████------------------------| 3097/7340 [109:27<149:57, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3098/7340 [109:28<149:54, 28.3 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:48,097 - agent.ComputerAgent - INFO - Computer: click({'x': 880, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:50,080 - agent.ComputerAgent - INFO - Computer: type({'text': 'conda install -y -c conda-forge --override-channels datasets\\n'})\n",
+ " 42%|████████████████------------------------| 3098/7340 [109:31<149:58, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:51,375 - agent.ComputerAgent - INFO - Computer: type({'text': 'python --version\\npython3 --version\\nls /usr/bin/python* | head -n 20\\napt-cache policy python4 || apt-cache search python4 | head\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3100/7340 [109:34<149:52, 28.3 steps/min]2025-08-11 17:15:53,291 - agent.ComputerAgent - INFO - Computer: move({'x': 914, 'y': 232})\n",
+ "\u001b[92m17:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:53,963 - agent.ComputerAgent - INFO - Computer: click({'x': 935, 'y': 351})\n",
+ "\u001b[92m17:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3101/7340 [109:36<149:49, 28.3 steps/min]\n",
+ "2025-08-11 17:15:55,270 - agent.ComputerAgent - INFO - Computer: click({'x': 225, 'y': 520})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3103/7340 [109:37<149:41, 28.3 steps/min]\u001b[92m17:15:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:15:56,620 - agent.ComputerAgent - INFO - Computer: click({'x': 235, 'y': 206})\n",
+ "\u001b[92m17:15:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:15:57,299 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 659, 'scroll_x': 0, 'x': 840, 'y': 467})\n",
+ " 42%|████████████████------------------------| 3104/7340 [109:39<149:38, 28.3 steps/min]2025-08-11 17:15:57,924 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:15:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:15:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3106/7340 [109:40<149:30, 28.3 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:15:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:15:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:00,272 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 193, 'y': 180}, {'x': 184, 'y': 293}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3106/7340 [109:42<149:33, 28.3 steps/min]\u001b[92m17:16:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:02,072 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3107/7340 [109:43<149:29, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:16:02,765 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:16:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:16:03,441 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:16:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:04,480 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:16:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3108/7340 [109:46<149:28, 28.3 steps/min]2025-08-11 17:16:05,143 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:16:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:05,823 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:16:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3108/7340 [109:47<149:29, 28.3 steps/min]2025-08-11 17:16:06,914 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:16:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3108/7340 [109:48<149:31, 28.3 steps/min]2025-08-11 17:16:07,563 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:16:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:08,252 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:16:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3108/7340 [109:50<149:34, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:16:10,625 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:16:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:16:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3108/7340 [109:52<149:36, 28.3 steps/min]2025-08-11 17:16:11,298 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 249})\n",
+ "\u001b[92m17:16:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:11,982 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:16:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:12,642 - agent.ComputerAgent - INFO - Computer: click({'x': 381, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 381, 'y': 91})\n",
+ " 42%|████████████████------------------------| 3110/7340 [109:55<149:30, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:16:15,007 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 42%|████████████████------------------------| 3111/7340 [109:57<149:28, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3111/7340 [109:59<149:30, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:16:18,043 - agent.ComputerAgent - INFO - Computer: click({'x': 413, 'y': 587})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 413, 'y': 587})\n",
+ "\u001b[92m17:16:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:18,709 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 125, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3111/7340 [110:00<149:32, 28.3 steps/min]2025-08-11 17:16:19,335 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:16:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:20,773 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:16:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3113/7340 [110:03<149:26, 28.3 steps/min]\u001b[92m17:16:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:16:22,077 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:16:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:16:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:23,083 - agent.ComputerAgent - INFO - Computer: click({'x': 839, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 839, 'y': 234})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:16:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/730002fc-5760-41b0-97b8-f6783353a242/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3113/7340 [110:06<149:30, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:16:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:16:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:16:25,728 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 244})\n",
+ "\u001b[92m17:16:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3114/7340 [110:07<149:26, 28.3 steps/min]2025-08-11 17:16:26,375 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:16:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:27,003 - agent.ComputerAgent - INFO - Computer: click({'x': 847, 'y': 404})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 847, 'y': 404})\n",
+ "\u001b[92m17:16:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3115/7340 [110:08<149:23, 28.3 steps/min]2025-08-11 17:16:28,017 - agent.ComputerAgent - INFO - Computer: click({'x': 880, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 880, 'y': 203})\n",
+ "2025-08-11 17:16:28,682 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:16:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3116/7340 [110:10<149:21, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:16:30,033 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3117/7340 [110:12<149:18, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:31,839 - agent.ComputerAgent - INFO - Computer: click({'x': 131, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 131, 'y': 181})\n",
+ " 42%|████████████████------------------------| 3118/7340 [110:13<149:15, 28.3 steps/min]2025-08-11 17:16:32,503 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:16:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 42%|████████████████------------------------| 3119/7340 [110:14<149:11, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3119/7340 [110:15<149:12, 28.3 steps/min]2025-08-11 17:16:34,177 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:16:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:16:36,680 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ " 42%|████████████████------------------------| 3119/7340 [110:18<149:16, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:16:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:16:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:16:37,927 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:16:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:38,544 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:16:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:16:39,208 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 940, 'y': 202})\n",
+ "\u001b[92m17:16:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 42%|████████████████------------------------| 3119/7340 [110:21<149:21, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:40,538 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 600, 'x': 422, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 600, 'x': 422, 'y': 249})\n",
+ "2025-08-11 17:16:41,185 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:16:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 43%|█████████████████-----------------------| 3121/7340 [110:23<149:14, 28.3 steps/min]\u001b[92m17:16:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:42,861 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:16:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:44,179 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 115, 'y': 184})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:16:44,861 - agent.ComputerAgent - INFO - Computer: click({'x': 185, 'y': 177})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 185, 'y': 177})\n",
+ " 43%|█████████████████-----------------------| 3121/7340 [110:26<149:17, 28.3 steps/min]\u001b[92m17:16:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:45,524 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:16:45,525 - agent.ComputerAgent - INFO - Computer: click({'x': 345, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 345, 'y': 202})\n",
+ "2025-08-11 17:16:46,155 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:16:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3123/7340 [110:27<149:09, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:16:48,493 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "\u001b[92m17:16:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6bacb467-6eb5-4ead-ac71-a185d2fa5e80/close \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3124/7340 [110:30<149:07, 28.3 steps/min]2025-08-11 17:16:49,152 - agent.ComputerAgent - INFO - Computer: click({'x': 964, 'y': 734})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 964, 'y': 734})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3125/7340 [110:31<149:04, 28.3 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:16:50,459 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m17:16:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3126/7340 [110:32<149:01, 28.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:16:52,287 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:16:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3126/7340 [110:34<149:02, 28.3 steps/min]2025-08-11 17:16:52,944 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:16:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]2025-08-11 17:16:53,805 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:16:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3126/7340 [110:35<149:05, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]2025-08-11 17:16:56,038 - agent.ComputerAgent - INFO - Computer: type({'text': 'python --version || true\\npython3 --version\\napt-cache policy python4\\napt-cache search ^python4$\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'python --version || true\\npython3 --version\\napt-cache policy python4\\napt-cache search ^python4$\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it] 28.3 steps/min]2025-08-11 17:16:56,926 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:16:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 17:16:57,613 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:16:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3128/7340 [110:39<149:00, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:16:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3128/7340 [110:40<149:01, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:16:59,390 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:16:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3128/7340 [110:41<149:02, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:00,054 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "\u001b[92m17:17:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:01,584 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3128/7340 [110:43<149:06, 28.2 steps/min]\u001b[92m17:17:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:02,902 - agent.ComputerAgent - INFO - Computer: click({'x': 835, 'y': 640})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m17:17:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:03,576 - agent.ComputerAgent - INFO - Computer: click({'x': 491, 'y': 367})\n",
+ " 43%|█████████████████-----------------------| 3130/7340 [110:45<148:58, 28.3 steps/min]\u001b[92m17:17:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:04,287 - agent.ComputerAgent - INFO - Computer: click({'x': 880, 'y': 203})\n",
+ "2025-08-11 17:17:04,933 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:17:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3132/7340 [110:46<148:50, 28.3 steps/min]2025-08-11 17:17:05,574 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:17:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:06,884 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:08,195 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "\u001b[92m17:17:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:08,853 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:17:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3133/7340 [110:50<148:50, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:17:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:09,997 - agent.ComputerAgent - INFO - Computer: click({'x': 347, 'y': 222})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3133/7340 [110:51<148:51, 28.3 steps/min]2025-08-11 17:17:10,673 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:17:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:12,792 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 43%|█████████████████-----------------------| 3135/7340 [110:54<148:45, 28.3 steps/min]2025-08-11 17:17:13,449 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:17:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:14,146 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:17:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:15,529 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:17:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 43%|█████████████████-----------------------| 3136/7340 [110:57<148:44, 28.3 steps/min]\u001b[92m17:17:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:16,706 - agent.ComputerAgent - INFO - Computer: click({'x': 565, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:18,039 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "\u001b[92m17:17:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:19,330 - agent.ComputerAgent - INFO - Agent: The United Airlines Checked Bag Fee Calculator is open and ready to use. Task completed.\n",
+ "2025-08-11 17:17:20,001 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 218\n",
+ " - prompt_tokens: 6013\n",
+ " - total_tokens: 6231\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0097\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:21,358 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 43%|█████████████████-----------------------| 3138/7340 [111:03<148:42, 28.3 steps/min]2025-08-11 17:17:22,012 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:17:22,012 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 402})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:17:22,679 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:17:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3139/7340 [111:04<148:39, 28.3 steps/min]2025-08-11 17:17:23,341 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:17:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:24,654 - agent.ComputerAgent - INFO - Computer: type({'text': 'python -c \"import datasets, sys; print(\\'datasets version:\\', datasets.__version__)\"\\n'})\n",
+ " 43%|█████████████████-----------------------| 3140/7340 [111:06<148:36, 28.3 steps/min]2025-08-11 17:17:25,261 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:17:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3141/7340 [111:07<148:33, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:26,469 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "\u001b[92m17:17:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3141/7340 [111:08<148:34, 28.3 steps/min]2025-08-11 17:17:27,141 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:17:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:28,488 - agent.ComputerAgent - INFO - Computer: type({'text': \"which python4 || command -v python4 || ls /usr/bin/python4* 2>/dev/null || echo 'no python4 found'\\n\"})\n",
+ " 43%|█████████████████-----------------------| 3141/7340 [111:10<148:37, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3143/7340 [111:11<148:28, 28.3 steps/min]2025-08-11 17:17:30,119 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:17:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:32,137 - agent.ComputerAgent - INFO - Computer: type({'text': '=DATEDIF(DATEVALUE(REGEX(A2;\".* \";\"\"));TODAY();\"y\")'})\n",
+ "2025-08-11 17:17:32,777 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m17:17:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:34,149 - agent.ComputerAgent - INFO - Computer: type({'text': 'LLM Powered Autonomous Agents.pdf'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3143/7340 [111:17<148:36, 28.2 steps/min]\u001b[92m17:17:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:17:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:36,722 - agent.ComputerAgent - INFO - Computer: click({'x': 488, 'y': 368})\n",
+ "\u001b[92m17:17:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:17:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:37,353 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 201})\n",
+ "2025-08-11 17:17:37,994 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 706, 'y': 659})\n",
+ "\u001b[92m17:17:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 43%|█████████████████-----------------------| 3147/7340 [111:19<148:19, 28.3 steps/min]\u001b[92m17:17:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:38,649 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:17:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:17:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:17:40,658 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 17:17:41,329 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 194, 'y': 183}, {'x': 184, 'y': 291}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3149/7340 [111:23<148:14, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:17:41,970 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m17:17:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:42,600 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3150/7340 [111:24<148:11, 28.3 steps/min]2025-08-11 17:17:43,277 - agent.ComputerAgent - INFO - Computer: click({'x': 850, 'y': 202})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:43,930 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:17:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3150/7340 [111:25<148:13, 28.3 steps/min]2025-08-11 17:17:44,570 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:17:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:45,261 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:17:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:45,939 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:17:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/091ec079-295e-4528-bad5-f34604d013c2/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3151/7340 [111:27<148:10, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 43%|█████████████████-----------------------| 3152/7340 [111:31<148:11, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:51,370 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:17:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3152/7340 [111:33<148:13, 28.3 steps/min]\u001b[92m17:17:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:17:52,759 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m17:17:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:53,422 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:17:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3152/7340 [111:36<148:18, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:17:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3153/7340 [111:37<148:14, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]2025-08-11 17:17:57,266 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:17:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3153/7340 [111:39<148:15, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:17:57,933 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m17:17:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it] 28.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 43%|█████████████████-----------------------| 3154/7340 [111:43<148:16, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3167/7340 [111:44<147:14, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:03,401 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m17:18:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:04,060 - agent.ComputerAgent - INFO - Computer: click({'x': 666, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 666, 'y': 219})\n",
+ "\u001b[92m17:18:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/094ee49d-29b5-4911-bfc8-7d0e73a55c44/close \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3167/7340 [111:45<147:15, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:04,733 - agent.ComputerAgent - INFO - Computer: click({'x': 442, 'y': 162})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 442, 'y': 162})\n",
+ "2025-08-11 17:18:05,375 - agent.ComputerAgent - INFO - Computer: click({'x': 811, 'y': 336})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 811, 'y': 336})\n",
+ "2025-08-11 17:18:06,055 - agent.ComputerAgent - INFO - Computer: double_click({'x': 347, 'y': 222})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 347, 'y': 222})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m17:18:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3169/7340 [111:48<147:09, 28.3 steps/min]2025-08-11 17:18:07,401 - agent.ComputerAgent - INFO - Computer: click({'x': 536, 'y': 276})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 536, 'y': 276})\n",
+ "2025-08-11 17:18:08,031 - agent.ComputerAgent - INFO - Computer: double_click({'x': 489, 'y': 368})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 489, 'y': 368})\n",
+ " 43%|█████████████████-----------------------| 3172/7340 [111:49<146:56, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3174/7340 [111:50<146:48, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:09,694 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m17:18:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3174/7340 [111:52<146:50, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3175/7340 [111:53<146:47, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:13,931 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3175/7340 [111:56<146:51, 28.4 steps/min]2025-08-11 17:18:15,929 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m17:18:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:16,592 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:18:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/reset \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3175/7340 [111:58<146:53, 28.4 steps/min]2025-08-11 17:18:17,271 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:18:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:17,933 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:18:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3175/7340 [111:59<146:55, 28.3 steps/min]2025-08-11 17:18:18,584 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:18:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:18:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3175/7340 [112:01<146:56, 28.3 steps/min]2025-08-11 17:18:19,930 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:18:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 43%|█████████████████-----------------------| 3176/7340 [112:03<146:54, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 17:18:22,230 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m17:18:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:22,888 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:18:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3176/7340 [112:04<146:56, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 17:18:24,301 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:18:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:25,762 - agent.ComputerAgent - INFO - Computer: type({'text': '=DATEDIF(DATEVALUE(TRIM(RIGHT(SUBSTITUTE(A2,\" \",REPT(\" \",50)),50))),TODAY(),\"Y\")'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=DATEDIF(DATEVALUE(TRIM(RIGHT(SUBSTITUTE(A2,\" \",REPT(\" \",50)),50))),TODAY(),\"Y\")'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3176/7340 [112:07<147:00, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:26,382 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m17:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:27,061 - agent.ComputerAgent - INFO - Computer: click({'x': 884, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 884, 'y': 202})\n",
+ " 43%|█████████████████-----------------------| 3178/7340 [112:08<146:52, 28.3 steps/min]\u001b[92m17:18:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:27,739 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -614, 'scroll_x': 0, 'x': 377, 'y': 158})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -614, 'scroll_x': 0, 'x': 377, 'y': 158})\n",
+ "\u001b[92m17:18:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/025be48d-d757-4973-8c17-e42b8f6814b0/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:18:29,052 - agent.ComputerAgent - INFO - Agent: I attempted to set the default Python to “Python4,” but this is not possible.\n",
+ "\n",
+ "What I verified on your Ubuntu system:\n",
+ "- python3 is installed (Python 3.x is the current major version).\n",
+ "- There is no python4 binary on the system (which python4 → not found).\n",
+ "- apt can’t find a package named python4 (apt-cache policy/search → Unable to locate package python4).\n",
+ "\n",
+ "Reason:\n",
+ "- Python 4 has not been released and no official packages/binaries exist, so Ubuntu cannot install or switch to it.\n",
+ "\n",
+ "Options I can perform instead (tell me which you want):\n",
+ "- Set the default python command to point to python3 (via python-is-python3 or update-alternatives) or to a specific installed Python 3 version.\n",
+ "- Install additional Python 3 versions (e.g., 3.11/3.12) from Ubuntu or deadsnakes PPA and configure update-alternatives to choose a default.\n",
+ "- Install pyenv to manage multiple Python versions per user.\n",
+ "- Create a dummy “python4” alias to python3 for compatibility scripts (not recommended, as it’s misleading).\n",
+ "\n",
+ "This task cannot be completed because Python4 does not exist for Ubuntu at this time.\n",
+ "INFO:agent.ComputerAgent:Agent: I attempted to set the default Python to “Python4,” but this is not possible.\n",
+ "\n",
+ "What I verified on your Ubuntu system:\n",
+ "- python3 is installed (Python 3.x is the current major version).\n",
+ "- There is no python4 binary on the system (which python4 → not found).\n",
+ "- apt can’t find a package named python4 (apt-cache policy/search → Unable to locate package python4).\n",
+ "\n",
+ "Reason:\n",
+ "- Python 4 has not been released and no official packages/binaries exist, so Ubuntu cannot install or switch to it.\n",
+ "\n",
+ "Options I can perform instead (tell me which you want):\n",
+ "- Set the default python command to point to python3 (via python-is-python3 or update-alternatives) or to a specific installed Python 3 version.\n",
+ "- Install additional Python 3 versions (e.g., 3.11/3.12) from Ubuntu or deadsnakes PPA and configure update-alternatives to choose a default.\n",
+ "- Install pyenv to manage multiple Python versions per user.\n",
+ "- Create a dummy “python4” alias to python3 for compatibility scripts (not recommended, as it’s misleading).\n",
+ "\n",
+ "This task cannot be completed because Python4 does not exist for Ubuntu at this time.\n",
+ "2025-08-11 17:18:29,749 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1665\n",
+ " - prompt_tokens: 6754\n",
+ " - total_tokens: 8419\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1408\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0251\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1665\n",
+ " - prompt_tokens: 6754\n",
+ " - total_tokens: 8419\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1408\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0251\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:30,402 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 136, 'y': 180}, {'x': 141, 'y': 243}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 136, 'y': 180}, {'x': 141, 'y': 243}]})\n",
+ "2025-08-11 17:18:31,022 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m17:18:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3180/7340 [112:12<146:47, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:32,337 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:18:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3182/7340 [112:14<146:39, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<146:36, 28.4 steps/min]2025-08-11 17:18:34,351 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:18:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it] 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:36,056 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m17:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3183/7340 [112:18<146:40, 28.3 steps/min]\u001b[92m17:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]2025-08-11 17:18:38,147 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it] 28.3 steps/min]2025-08-11 17:18:38,912 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:18:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 17:18:39,590 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:18:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 43%|█████████████████-----------------------| 3183/7340 [112:21<146:44, 28.3 steps/min]2025-08-11 17:18:40,378 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:18:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:43,100 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m17:18:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:18:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:44,441 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:18:44,443 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3184/7340 [112:26<146:46, 28.3 steps/min]\u001b[92m17:18:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:18:45,794 - agent.ComputerAgent - INFO - Computer: click({'x': 855, 'y': 336})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 855, 'y': 336})\n",
+ "2025-08-11 17:18:46,467 - agent.ComputerAgent - INFO - Computer: click({'x': 347, 'y': 222})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 347, 'y': 222})\n",
+ "2025-08-11 17:18:47,141 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:18:47,143 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 384})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 384})\n",
+ "\u001b[92m17:18:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:18:47,792 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:18:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:48,442 - agent.ComputerAgent - INFO - Computer: double_click({'x': 386, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 386, 'y': 91})\n",
+ "\u001b[92m17:18:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 43%|█████████████████-----------------------| 3188/7340 [112:30<146:31, 28.3 steps/min]2025-08-11 17:18:49,083 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:18:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:49,773 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:18:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:50,427 - agent.ComputerAgent - INFO - Computer: click({'x': 854, 'y': 222})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 854, 'y': 222})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:18:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 43%|█████████████████-----------------------| 3192/7340 [112:33<146:15, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:18:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:18:52,246 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:18:52,248 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 53})\n",
+ " 44%|█████████████████-----------------------| 3193/7340 [112:34<146:11, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afb4e623-39bf-4f23-ac18-6c4a71f53c62/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:53,614 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m17:18:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3194/7340 [112:35<146:08, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3194/7340 [112:37<146:11, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:56,793 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:18:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:38<146:08, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:57,480 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:18:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:58,172 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:18:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:18:58,842 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:18:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:40<146:10, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:18:59,502 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:19:00,122 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:19:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:41<146:12, 28.4 steps/min]2025-08-11 17:19:00,781 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:19:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:42<146:13, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<146:15, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6a6179f5-13f9-4283-a0d1-aaafd881b00a/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:45<146:17, 28.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:47<146:19, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 17:19:08,375 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:50<146:23, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:10,290 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo apt-get update -y && sudo apt-get install -y python-is-python3\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sudo apt-get update -y && sudo apt-get install -y python-is-python3\\n'})\n",
+ "2025-08-11 17:19:10,961 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3195/7340 [112:53<146:27, 28.3 steps/min]\u001b[92m17:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:19:12,309 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:19:12,311 - agent.ComputerAgent - INFO - Computer: click({'x': 683, 'y': 516})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 683, 'y': 516})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3196/7340 [112:54<146:24, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3197/7340 [112:57<146:23, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it]\u001b[92m17:19:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3197/7340 [112:59<146:25, 28.3 steps/min]\u001b[92m17:19:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.55s/it]2025-08-11 17:19:18,643 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:19:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it] 28.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3197/7340 [113:02<146:29, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:21,188 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:19:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:19:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:21,844 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 237})\n",
+ " 44%|█████████████████-----------------------| 3197/7340 [113:03<146:30, 28.3 steps/min]\u001b[92m17:19:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:22,557 - agent.ComputerAgent - INFO - Computer: click({'x': 957, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 957, 'y': 732})\n",
+ "\u001b[92m17:19:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:23,218 - agent.ComputerAgent - INFO - Computer: click({'x': 870, 'y': 97})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 870, 'y': 97})\n",
+ " 44%|█████████████████-----------------------| 3198/7340 [113:04<146:27, 28.3 steps/min]\u001b[92m17:19:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:23,886 - agent.ComputerAgent - INFO - Computer: click({'x': 823, 'y': 337})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 823, 'y': 337})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:19:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:19:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:19:25,175 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 53})\n",
+ "2025-08-11 17:19:25,794 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 18})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 18})\n",
+ " 44%|█████████████████-----------------------| 3200/7340 [113:07<146:21, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:27,129 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 211, 'y': 181})\n",
+ "\u001b[92m17:19:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3203/7340 [113:09<146:09, 28.3 steps/min]\u001b[92m17:19:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:19:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:29,084 - agent.ComputerAgent - INFO - Computer: click({'x': 204, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 204, 'y': 124})\n",
+ "\u001b[92m17:19:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3204/7340 [113:10<146:06, 28.3 steps/min]2025-08-11 17:19:29,755 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 185, 'y': 177}, {'x': 184, 'y': 293}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 185, 'y': 177}, {'x': 184, 'y': 293}]})\n",
+ "\u001b[92m17:19:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:31,062 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 372, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 512, 'y': 372, 'button': 'left'})\n",
+ "2025-08-11 17:19:31,752 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 211, 'y': 60})\n",
+ " 44%|█████████████████-----------------------| 3208/7340 [113:14<145:51, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3208/7340 [113:15<145:52, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:34,404 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:19:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:35,045 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:19:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3208/7340 [113:16<145:54, 28.3 steps/min]2025-08-11 17:19:35,743 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:19:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:36,422 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:19:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3208/7340 [113:18<145:57, 28.3 steps/min]\u001b[92m17:19:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:19:37,789 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:19:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:38,443 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:19:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:19:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3208/7340 [113:20<145:58, 28.3 steps/min]2025-08-11 17:19:39,510 - agent.ComputerAgent - INFO - Computer: click({'x': 322, 'y': 169})\n",
+ "2025-08-11 17:19:40,172 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:19:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:19:40,842 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:19:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3208/7340 [113:22<146:02, 28.3 steps/min]2025-08-11 17:19:41,520 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:19:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:19:42,193 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:19:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3209/7340 [113:24<145:58, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:43,403 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:19:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3209/7340 [113:25<146:00, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3213/7340 [113:26<145:42, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa0172ad-f4a9-4f1a-9e06-2d510775dbd0/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3213/7340 [113:27<145:43, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:19:46,292 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 75})\n",
+ " 44%|█████████████████-----------------------| 3213/7340 [113:28<145:45, 28.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:47,996 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:19:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3214/7340 [113:29<145:42, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:49,340 - agent.ComputerAgent - INFO - Computer: type({'text': 'ACL 2022 Best Long Paper'})\n",
+ " 44%|█████████████████-----------------------| 3215/7340 [113:32<145:40, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<145:41, 28.3 steps/min]2025-08-11 17:19:51,723 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:19:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.62s/it]2025-08-11 17:19:53,114 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:19:53,116 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3215/7340 [113:35<145:44, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:19:54,548 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:19:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:19:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3216/7340 [113:36<145:41, 28.3 steps/min]2025-08-11 17:19:56,086 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:19:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 17:19:57,527 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:59,005 - agent.ComputerAgent - INFO - Agent: I’m blocked by Google’s 2‑Step Verification on your account (it requires a code from your Google Authenticator app). I can’t access Drive to create the Google Doc or reach the guidebook without this verification.\n",
+ "\n",
+ "Please complete the 2FA prompt in the open browser window (or provide a valid one-time code). As soon as access is granted, I will:\n",
+ "- Open Google Drive and navigate to /environment_policy.\n",
+ "- Create a new Google Doc named “environment_policy_report (draft)”.\n",
+ "- Locate the Green Economy guidebook, extract the exact Introduction section, and paste it into the doc with clean formatting.\n",
+ "- Save it in the /environment_policy folder.\n",
+ "\n",
+ "If the guidebook isn’t in Drive, share a link or upload it and I’ll proceed immediately.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:19:59,657 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1958\n",
+ " - prompt_tokens: 2697\n",
+ " - total_tokens: 4655\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1792\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0230\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3217/7340 [113:42<145:43, 28.3 steps/min]\u001b[92m17:20:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:20:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:20:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:20:01,020 - agent.ComputerAgent - INFO - Computer: click({'x': 451, 'y': 162})\n",
+ "2025-08-11 17:20:01,675 - agent.ComputerAgent - INFO - Computer: click({'x': 820, 'y': 338})\n",
+ "2025-08-11 17:20:02,334 - agent.ComputerAgent - INFO - Computer: double_click({'x': 386, 'y': 91})\n",
+ "\u001b[92m17:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3217/7340 [113:44<145:45, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:02,975 - agent.ComputerAgent - INFO - Computer: click({'x': 124, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:20:05,643 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 44%|█████████████████-----------------------| 3220/7340 [113:47<145:35, 28.3 steps/min]\u001b[92m17:20:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:20:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:06,967 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:20:06,968 - agent.ComputerAgent - INFO - Computer: double_click({'x': 17, 'y': 477})\n",
+ "2025-08-11 17:20:07,637 - agent.ComputerAgent - INFO - Computer: click({'x': 453, 'y': 280})\n",
+ " 44%|█████████████████-----------------------| 3222/7340 [113:49<145:28, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:20:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:08,811 - agent.ComputerAgent - INFO - Computer: click({'x': 946, 'y': 95})\n",
+ " 44%|█████████████████-----------------------| 3224/7340 [113:50<145:20, 28.3 steps/min]2025-08-11 17:20:09,463 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:20:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:10,854 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://huggingface.co/papers/date/2024-03-01'})\n",
+ "2025-08-11 17:20:11,473 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:20:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3225/7340 [113:53<145:18, 28.3 steps/min]2025-08-11 17:20:12,155 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:20:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [113:54<144:51, 28.4 steps/min]2025-08-11 17:20:13,333 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:20:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/085317e9-3b47-437e-8528-0a0fc0e6e688/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [113:55<144:53, 28.4 steps/min]2025-08-11 17:20:14,619 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:20:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:15,296 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:20:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [113:57<144:54, 28.4 steps/min]2025-08-11 17:20:15,983 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:20:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [113:58<144:56, 28.4 steps/min]2025-08-11 17:20:16,655 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:20:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [113:59<144:57, 28.3 steps/min]2025-08-11 17:20:17,949 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:20:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [114:01<145:00, 28.3 steps/min]\u001b[92m17:20:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:19,945 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:20:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]\u001b[92m17:20:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3231/7340 [114:02<145:02, 28.3 steps/min]2025-08-11 17:20:21,869 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:20:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]2025-08-11 17:20:23,330 - agent.ComputerAgent - INFO - Computer: type({'text': '7'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it] 28.3 steps/min]\n",
+ " 44%|█████████████████-----------------------| 3232/7340 [114:07<145:02, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 44%|█████████████████-----------------------| 3232/7340 [114:08<145:04, 28.3 steps/min]\u001b[92m17:20:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:20:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:20:27,565 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 142})\n",
+ " 44%|█████████████████-----------------------| 3232/7340 [114:09<145:05, 28.3 steps/min]\u001b[92m17:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:20:28,215 - agent.ComputerAgent - INFO - Computer: click({'x': 209, 'y': 181})\n",
+ "\u001b[92m17:20:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:28,874 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:20:28,874 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 402})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:30,217 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k ctrl+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3233/7340 [114:11<145:04, 28.3 steps/min]2025-08-11 17:20:30,912 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 131, 'y': 180}, {'x': 131, 'y': 233}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:32,842 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:20:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:34,798 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ " 44%|█████████████████-----------------------| 3235/7340 [114:16<145:00, 28.3 steps/min]\u001b[92m17:20:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:20:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:20:36,096 - agent.ComputerAgent - INFO - Computer: type({'text': 'python --version\\nwhich python\\npython3 --version\\n'})\n",
+ "2025-08-11 17:20:36,737 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 128})\n",
+ "\u001b[92m17:20:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+    " 44%|█████████████████-----------------------| 3237/7340 [114:18<144:53, 28.3 steps/min]\n",
+ "\u001b[92m17:20:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:37,381 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:20:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:20:38,011 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 436})\n",
+ "2025-08-11 17:20:38,707 - agent.ComputerAgent - INFO - Computer: click({'x': 381, 'y': 91})\n",
+ " 44%|█████████████████-----------------------| 3239/7340 [114:20<144:46, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/55d96e84-fdd8-4854-8eca-195f1a84dbc3/close \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3241/7340 [114:21<144:38, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:20:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:20:41,212 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 64})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3241/7340 [114:22<144:39, 28.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3242/7340 [114:23<144:36, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:44,035 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3242/7340 [114:25<144:38, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:44,665 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:20:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:45,307 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:20:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:20:45,996 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:20:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3243/7340 [114:27<144:36, 28.3 steps/min]2025-08-11 17:20:46,669 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:20:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b28701c2-0fa4-4b07-bace-735fd2133893/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:47,323 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:20:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3243/7340 [114:29<144:37, 28.3 steps/min]2025-08-11 17:20:47,995 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:20:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:20:50,037 - agent.ComputerAgent - INFO - Agent: I’m blocked by Google’s 2‑Step Verification on your account. The current screen only allows:\n",
+ "- Entering a code from the Google Authenticator app, or\n",
+ "- “Get help,” which can take several days.\n",
+ "\n",
+ "This prevents me from accessing Google Drive to locate the guidebook and create the Google Doc in /environment_policy.\n",
+ "\n",
+ "Please do one of the following to unblock access:\n",
+ "- Open your Google Authenticator and enter the current code in the browser window that’s open, or\n",
+ "- Approve the login if you receive a prompt on your device, or\n",
+ "- Temporarily disable 2‑Step Verification or add a sign‑in method that provides an immediate code.\n",
+ "\n",
+ "Once the sign‑in completes, I will immediately:\n",
+ "1) Open Google Drive and navigate to /environment_policy.\n",
+ "2) Create “environment_policy_report (draft)” as a Google Doc.\n",
+ "3) Find the Green Economy guidebook in your Drive, extract the exact Introduction section, and paste it into the doc with clean formatting.\n",
+ "4) Save it in the /environment_policy folder.\n",
+ "\n",
+ "I’ll proceed as soon as access is granted.\n",
+ "2025-08-11 17:20:50,668 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1897\n",
+ " - prompt_tokens: 2875\n",
+ " - total_tokens: 4772\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1664\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0226\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3244/7340 [114:33<144:38, 28.3 steps/min]2025-08-11 17:20:51,965 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:20:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:20:52,623 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:20:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3244/7340 [114:34<144:39, 28.3 steps/min]2025-08-11 17:20:53,274 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:20:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:20:53,953 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:20:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<144:41, 28.3 steps/min]2025-08-11 17:20:54,633 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:20:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3244/7340 [114:36<144:42, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]2025-08-11 17:20:56,005 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:20:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3244/7340 [114:37<144:44, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it] 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:20:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]\u001b[92m17:20:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0190121-650c-4779-b26d-2480f313dc84/close \"HTTP/1.1 200 OK\"\n",
+    " 44%|█████████████████-----------------------| 3249/7340 [114:40<144:23, 28.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "\u001b[92m17:20:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:01,570 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 44%|█████████████████-----------------------| 3249/7340 [114:43<144:27, 28.3 steps/min]\u001b[92m17:21:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:02,210 - agent.ComputerAgent - INFO - Computer: click({'x': 453, 'y': 280})\n",
+ "\u001b[92m17:21:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:02,875 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:21:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:03,505 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "2025-08-11 17:21:04,152 - agent.ComputerAgent - INFO - Computer: click({'x': 397, 'y': 563})\n",
+ " 44%|█████████████████-----------------------| 3249/7340 [114:45<144:30, 28.3 steps/min]\u001b[92m17:21:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:04,807 - agent.ComputerAgent - INFO - Computer: click({'x': 611, 'y': 274})\n",
+ "2025-08-11 17:21:05,496 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:21:05,496 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 753})\n",
+ "2025-08-11 17:21:06,189 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 126})\n",
+ " 44%|█████████████████-----------------------| 3252/7340 [114:47<144:18, 28.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3255/7340 [114:50<144:08, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:11,094 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3255/7340 [114:53<144:11, 28.3 steps/min]\u001b[92m17:21:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:21:12,435 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:21:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:21:13,131 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:21:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.75s/it]2025-08-11 17:21:13,917 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:21:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3255/7340 [114:55<144:14, 28.3 steps/min]2025-08-11 17:21:14,558 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:21:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 17:21:15,477 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:21:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 44%|█████████████████-----------------------| 3255/7340 [114:57<144:15, 28.3 steps/min]2025-08-11 17:21:16,135 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:21:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 17:21:17,717 - agent.ComputerAgent - INFO - Computer: type({'text': '=DATEDIF(C2;TODAY();\"Y\")'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:21:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 44%|█████████████████-----------------------| 3255/7340 [115:00<144:20, 28.3 steps/min]\u001b[92m17:21:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:21:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:20,981 - agent.ComputerAgent - INFO - Agent: I cannot set the default Python to “Python4” because Python 4 does not exist in Ubuntu’s repositories (or upstream) yet.\n",
+ "\n",
+ "What I did to ensure a sensible default:\n",
+ "- Installed python-is-python3 so that the python command points to Python 3.\n",
+ "- Verified:\n",
+ " - python --version → Python 3.10.12\n",
+ " - which python → /usr/bin/python3\n",
+ " - python3 --version → Python 3.10.12\n",
+ "\n",
+ "If/when Python 4 becomes available, you can switch defaults with:\n",
+ "- sudo update-alternatives --install /usr/bin/python python /usr/bin/python4 X\n",
+ "- sudo update-alternatives --config python\n",
+ "\n",
+ "Optional next steps (if you want newer Python 3 now):\n",
+ "- Install a newer Python 3 from the deadsnakes PPA or use pyenv, then set it as default with update-alternatives or per-shell via pyenv.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 17:21:21,655 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 780\n",
+ " - prompt_tokens: 9066\n",
+ " - total_tokens: 9846\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 576\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0191\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:21:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:23,346 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:21:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3257/7340 [115:05<144:17, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:24,627 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 33})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 111, 'y': 33})\n",
+ "2025-08-11 17:21:25,317 - agent.ComputerAgent - INFO - Computer: click({'x': 491, 'y': 368})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 491, 'y': 368})\n",
+ "\u001b[92m17:21:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:25,975 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:21:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:21:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:26,652 - agent.ComputerAgent - INFO - Computer: click({'x': 589, 'y': 515})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 589, 'y': 515})\n",
+ " 44%|█████████████████-----------------------| 3257/7340 [115:08<144:20, 28.3 steps/min]2025-08-11 17:21:27,298 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 629}, {'x': 562, 'y': 261}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 989, 'y': 629}, {'x': 562, 'y': 261}]})\n",
+ "\u001b[92m17:21:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:27,967 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:21:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:21:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 44%|█████████████████-----------------------| 3260/7340 [115:09<144:07, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:21:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:29,116 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 125, 'y': 185}, {'x': 278, 'y': 579}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 125, 'y': 185}, {'x': 278, 'y': 579}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 44%|█████████████████-----------------------| 3261/7340 [115:11<144:05, 28.3 steps/min]\u001b[92m17:21:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:21:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:30,942 - agent.ComputerAgent - INFO - Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 842, 'y': 571})\n",
+ " 44%|█████████████████-----------------------| 3262/7340 [115:12<144:01, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3273/7340 [115:13<143:10, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96765d66-53fb-41dd-99b6-cd96984e52b3/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:21:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<143:12, 28.4 steps/min]2025-08-11 17:21:33,952 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:21:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]2025-08-11 17:21:35,429 - agent.ComputerAgent - INFO - Computer: type({'text': 'test.py'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'test.py'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35bb6fb7-5b34-473c-a541-13215a694bc6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:21:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|█████████████████-----------------------| 3273/7340 [115:17<143:16, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]2025-08-11 17:21:36,947 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:21:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:37,608 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:21:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 45%|█████████████████-----------------------| 3274/7340 [115:19<143:13, 28.4 steps/min]2025-08-11 17:21:38,417 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:21:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 17:21:39,109 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:21:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:40,734 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 45%|█████████████████-----------------------| 3274/7340 [115:22<143:17, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:21:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:42,749 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ " 45%|█████████████████-----------------------| 3274/7340 [115:24<143:19, 28.4 steps/min]\u001b[92m17:21:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:21:43,401 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 596})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 398, 'y': 596})\n",
+ "2025-08-11 17:21:44,047 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 261, 'y': 149})\n",
+ "\u001b[92m17:21:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:21:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|█████████████████-----------------------| 3274/7340 [115:25<143:21, 28.4 steps/min]2025-08-11 17:21:44,703 - agent.ComputerAgent - INFO - Computer: click({'x': 352, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 352, 'y': 202})\n",
+ "2025-08-11 17:21:45,353 - agent.ComputerAgent - INFO - Computer: click({'x': 358, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 358, 'y': 76})\n",
+ " 45%|█████████████████-----------------------| 3276/7340 [115:27<143:13, 28.4 steps/min]2025-08-11 17:21:46,015 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:21:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 45%|█████████████████-----------------------| 3278/7340 [115:28<143:05, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:47,857 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3279/7340 [115:30<143:03, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f1aa688-383a-44cc-9959-87e1e32dd55c/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:50,142 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:21:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3279/7340 [115:31<143:05, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:51,335 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:21:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:52,627 - agent.ComputerAgent - INFO - Computer: type({'text': '\"Guidebook to the Green Economy\" Introduction'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\"Guidebook to the Green Economy\" Introduction'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3279/7340 [115:34<143:08, 28.4 steps/min]2025-08-11 17:21:53,274 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:21:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:53,915 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:21:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3280/7340 [115:36<143:05, 28.4 steps/min]\u001b[92m17:21:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:21:55,259 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:21:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:21:55,917 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:21:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3280/7340 [115:38<143:08, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:21:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|█████████████████-----------------------| 3280/7340 [115:39<143:10, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:21:59,462 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'backspace'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'backspace'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:22:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 17:22:01,586 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 45%|█████████████████-----------------------| 3280/7340 [115:43<143:14, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:22:02,268 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:22:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 17:22:03,052 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:22:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it] 28.3 steps/min]\n",
+ " 45%|█████████████████-----------------------| 3281/7340 [115:45<143:12, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 45%|█████████████████-----------------------| 3281/7340 [115:46<143:14, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:22:05,726 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:22:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:22:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:22:06,387 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 637})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 637})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:22:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|█████████████████-----------------------| 3281/7340 [115:48<143:16, 28.3 steps/min]\u001b[92m17:22:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:22:07,739 - agent.ComputerAgent - INFO - Computer: click({'x': 529, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 529, 'y': 101})\n",
+ "\u001b[92m17:22:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:22:08,380 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 115})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 554, 'y': 115})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:22:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:22:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:22:11,051 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 45%|█████████████████-----------------------| 3282/7340 [115:52<143:16, 28.3 steps/min]\u001b[92m17:22:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:22:11,741 - agent.ComputerAgent - INFO - Computer: click({'x': 363, 'y': 123})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 363, 'y': 123})\n",
+ "\u001b[92m17:22:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:22:12,375 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:22:13,027 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 64})\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|█████████████████-----------------------| 3285/7340 [115:54<143:04, 28.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "[... repeated planner (gpt-5) and grounding-model (HelloKKMe/GTA1-7B) calls, HUD environment requests, and progress updates truncated ...]\n",
+ " 45%|██████████████████----------------------| 3306/7340 [116:56<142:41, 28.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:23:15,901 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:23:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 45%|██████████████████----------------------| 3319/7340 [116:57<141:42, 28.4 steps/min]2025-08-11 17:23:16,976 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:23:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ae2379a3-a039-4954-afc2-582f8ebffdd2/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:23:18,269 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:23:19,581 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ " 45%|██████████████████----------------------| 3319/7340 [117:01<141:46, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:23:21,509 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:23:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:22,163 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:23:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 45%|██████████████████----------------------| 3319/7340 [117:04<141:50, 28.3 steps/min]\u001b[92m17:23:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it]\u001b[92m17:23:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|██████████████████----------------------| 3319/7340 [117:06<141:52, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]\u001b[92m17:23:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 17:23:27,535 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 17:23:28,208 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 203\n",
+ " - prompt_tokens: 6028\n",
+ " - total_tokens: 6231\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0096\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 203\n",
+ " - prompt_tokens: 6028\n",
+ " - total_tokens: 6231\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0096\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 45%|██████████████████----------------------| 3320/7340 [117:10<141:52, 28.3 steps/min]\u001b[92m17:23:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:23:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 45%|██████████████████----------------------| 3320/7340 [117:12<141:55, 28.3 steps/min]\u001b[92m17:23:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:23:31,796 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 32})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 32})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:23:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:23:33,138 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+k ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+k ctrl+s'})\n",
+ "\u001b[92m17:23:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:23:33,783 - agent.ComputerAgent - INFO - Computer: click({'x': 381, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 381, 'y': 101})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:23:34,447 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 100})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 103, 'y': 100})\n",
+ "\u001b[92m17:23:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|██████████████████----------------------| 3320/7340 [117:16<141:59, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:23:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:23:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:23:35,135 - agent.ComputerAgent - INFO - Computer: double_click({'x': 217, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 217, 'y': 185})\n",
+ "\u001b[92m17:23:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:23:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:23:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:23:35,838 - agent.ComputerAgent - INFO - Computer: click({'x': 517, 'y': 129})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 517, 'y': 129})\n",
+ "2025-08-11 17:23:36,470 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:23:36,472 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 713})\n",
+ "2025-08-11 17:23:37,141 - agent.ComputerAgent - INFO - Computer: click({'x': 372, 'y': 258})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 372, 'y': 258})\n",
+ "2025-08-11 17:23:37,799 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:23:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:38,446 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 291})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 268, 'y': 291})\n",
+ "2025-08-11 17:23:39,115 - agent.ComputerAgent - INFO - Computer: click({'x': 118, 'y': 296})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 118, 'y': 296})\n",
+ "2025-08-11 17:23:39,772 - agent.ComputerAgent - INFO - Computer: click({'x': 113, 'y': 268})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 113, 'y': 268})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 45%|██████████████████----------------------| 3323/7340 [117:22<141:52, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:23:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:23:41,596 - agent.ComputerAgent - INFO - Computer: click({'x': 453, 'y': 280})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 453, 'y': 280})\n",
+ " 45%|██████████████████----------------------| 3330/7340 [117:23<141:21, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 45%|██████████████████----------------------| 3331/7340 [117:24<141:18, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:23:43,783 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:23:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 45%|██████████████████----------------------| 3331/7340 [117:25<141:19, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:23:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 46%|██████████████████----------------------| 3344/7340 [117:26<140:20, 28.5 steps/min]\u001b[92m17:23:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:23:45,985 - agent.ComputerAgent - INFO - Computer: click({'x': 352, 'y': 221})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 352, 'y': 221})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8dab132d-f531-4969-ab0d-ec9431c5c5e8/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3344/7340 [117:27<140:21, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:23:47,330 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:23:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:48,019 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:23:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:48,701 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:23:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:30<140:20, 28.5 steps/min]2025-08-11 17:23:49,390 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:23:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:50,074 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:23:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:50,727 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:23:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:32<140:22, 28.5 steps/min]2025-08-11 17:23:51,350 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:23:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:52,009 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:23:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:52,691 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:23:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:35<140:26, 28.4 steps/min]\u001b[92m17:23:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:23:54,769 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:36<140:27, 28.4 steps/min]2025-08-11 17:23:55,411 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:23:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:23:56,053 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:23:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:23:56,691 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:23:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:38<140:30, 28.4 steps/min]2025-08-11 17:23:57,370 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:23:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:40<140:32, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]\u001b[92m17:24:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:41<140:34, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it] 28.4 steps/min]\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:43<140:36, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:24:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:04,825 - agent.ComputerAgent - INFO - Computer: type({'text': 'do not track'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'do not track'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3345/7340 [117:47<140:40, 28.4 steps/min]\u001b[92m17:24:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:06,152 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:24:06,152 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:24:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:07,503 - agent.ComputerAgent - INFO - Computer: click({'x': 630, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 630, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 46%|██████████████████----------------------| 3346/7340 [117:49<140:39, 28.4 steps/min]\u001b[92m17:24:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:08,848 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 148})\n",
+ "2025-08-11 17:24:09,510 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 550})\n",
+ "\u001b[92m17:24:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:10,159 - agent.ComputerAgent - INFO - Computer: click({'x': 97, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3348/7340 [117:53<140:33, 28.4 steps/min]\u001b[92m17:24:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:24:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:12,680 - agent.ComputerAgent - INFO - Computer: click({'x': 443, 'y': 281})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3351/7340 [117:55<140:22, 28.4 steps/min]\u001b[92m17:24:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:14,006 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 333})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:15,332 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 270, 'y': 306}, {'x': 349, 'y': 306}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3352/7340 [117:58<140:21, 28.4 steps/min]\n",
+ "\u001b[92m17:24:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:17,316 - agent.ComputerAgent - INFO - Computer: click({'x': 403, 'y': 595})\n",
+ "2025-08-11 17:24:17,977 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 286})\n",
+ "\u001b[92m17:24:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3354/7340 [117:59<140:13, 28.4 steps/min]2025-08-11 17:24:18,654 - agent.ComputerAgent - INFO - Computer: double_click({'x': 227, 'y': 193})\n",
+ "2025-08-11 17:24:19,322 - agent.ComputerAgent - INFO - Computer: double_click({'x': 509, 'y': 153})\n",
+ "2025-08-11 17:24:19,970 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:24:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:24:20,620 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:24:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3356/7340 [118:02<140:07, 28.4 steps/min]2025-08-11 17:24:21,315 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:24:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:24:21,960 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:24:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:23,314 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ " 46%|██████████████████----------------------| 3358/7340 [118:05<140:01, 28.4 steps/min]2025-08-11 17:24:23,940 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:24:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:24,601 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:24:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3358/7340 [118:06<140:03, 28.4 steps/min]2025-08-11 17:24:25,618 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:24:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3358/7340 [118:07<140:04, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:26,340 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:24:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:27,029 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:24:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3358/7340 [118:08<140:06, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3358/7340 [118:09<140:07, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:28,710 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:24:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:24:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:29,393 - agent.ComputerAgent - INFO - Computer: click({'x': 179, 'y': 53})\n",
+ " 46%|██████████████████----------------------| 3358/7340 [118:11<140:08, 28.4 steps/min]2025-08-11 17:24:30,041 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:24:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:24:30,722 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:24:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3359/7340 [118:12<140:05, 28.4 steps/min]2025-08-11 17:24:31,401 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:24:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:24:32,072 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:24:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3359/7340 [118:13<140:07, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3359/7340 [118:15<140:09, 28.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:34,982 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m17:24:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3359/7340 [118:16<140:10, 28.4 steps/min]\n",
+ "2025-08-11 17:24:35,675 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 95})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:37,013 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:24:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:24:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:39,037 - agent.ComputerAgent - INFO - Computer: type({'text': 'Best Paper'})\n",
+ " 46%|██████████████████----------------------| 3359/7340 [118:20<140:15, 28.4 steps/min]\u001b[92m17:24:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:24:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:40,360 - agent.ComputerAgent - INFO - Computer: click({'x': 106, 'y': 101})\n",
+ "\u001b[92m17:24:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:41,005 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 330})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3361/7340 [118:23<140:09, 28.4 steps/min]\u001b[92m17:24:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:42,340 - agent.ComputerAgent - INFO - Computer: double_click({'x': 507, 'y': 153})\n",
+ "2025-08-11 17:24:42,993 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:24:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:24:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:45,030 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3363/7340 [118:27<140:05, 28.4 steps/min]\u001b[92m17:24:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:46,338 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 576})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:46,959 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:24:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:48,333 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 46%|██████████████████----------------------| 3364/7340 [118:30<140:03, 28.4 steps/min]2025-08-11 17:24:48,996 - agent.ComputerAgent - INFO - Computer: click({'x': 365, 'y': 326})\n",
+ "\u001b[92m17:24:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:24:49,680 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 284})\n",
+ " 46%|██████████████████----------------------| 3368/7340 [118:32<139:47, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3368/7340 [118:33<139:49, 28.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:53,176 - agent.ComputerAgent - INFO - Computer: click({'x': 375, 'y': 595})\n",
+ " 46%|██████████████████----------------------| 3368/7340 [118:34<139:50, 28.4 steps/min]\u001b[92m17:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:54,523 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 345, 'y': 304}, {'x': 347, 'y': 304}]})\n",
+ "\u001b[92m17:24:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3369/7340 [118:36<139:47, 28.4 steps/min]2025-08-11 17:24:55,185 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 53})\n",
+ "2025-08-11 17:24:55,853 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:24:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3370/7340 [118:38<139:45, 28.4 steps/min]\u001b[92m17:24:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:57,181 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:24:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:24:57,872 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:24:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:24:58,551 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:24:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:24:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:24:59,230 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:24:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+        " 46%|██████████████████----------------------| 3371/7340 [118:41<139:44, 28.4 steps/min]\n",
+ "2025-08-11 17:24:59,882 - agent.ComputerAgent - INFO - Computer: click({'x': 216, 'y': 469})\n",
+ "2025-08-11 17:25:00,512 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:25:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:01,884 - agent.ComputerAgent - INFO - Computer: type({'text': '7'})\n",
+ " 46%|██████████████████----------------------| 3371/7340 [118:43<139:47, 28.4 steps/min]2025-08-11 17:25:02,571 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:25:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:25:03,230 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:25:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:04,619 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 46%|██████████████████----------------------| 3373/7340 [118:46<139:41, 28.4 steps/min]2025-08-11 17:25:05,281 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3374/7340 [118:47<139:37, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 46%|██████████████████----------------------| 3374/7340 [118:48<139:39, 28.4 steps/min]\u001b[92m17:25:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:25:07,643 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 163})\n",
+ " 46%|██████████████████----------------------| 3374/7340 [118:49<139:40, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:08,823 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+        " 46%|██████████████████----------------------| 3375/7340 [118:50<139:37, 28.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:10,020 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ " 46%|██████████████████----------------------| 3375/7340 [118:51<139:38, 28.4 steps/min]\u001b[92m17:25:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:10,703 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:25:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:11,373 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:25:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3375/7340 [118:53<139:40, 28.4 steps/min]\u001b[92m17:25:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:13,076 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:25:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3375/7340 [118:54<139:42, 28.4 steps/min]2025-08-11 17:25:13,768 - agent.ComputerAgent - INFO - Computer: double_click({'x': 173, 'y': 340})\n",
+ "2025-08-11 17:25:14,837 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:25:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+        " 46%|██████████████████----------------------| 3375/7340 [118:56<139:44, 28.4 steps/min]\n",
+ "2025-08-11 17:25:15,502 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:25:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3376/7340 [118:57<139:41, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:25:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:25:17,373 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 177})\n",
+ " 46%|██████████████████----------------------| 3376/7340 [118:59<139:42, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3377/7340 [119:00<139:39, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:25:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:25:19,722 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 77})\n",
+ " 46%|██████████████████----------------------| 3377/7340 [119:01<139:40, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:21,028 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:22,338 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3378/7340 [119:05<139:40, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:25:24,955 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:25:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:25:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 46%|██████████████████----------------------| 3379/7340 [119:06<139:37, 28.4 steps/min]\n",
+ "\u001b[92m17:25:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:25,652 - agent.ComputerAgent - INFO - Computer: move({'x': 153, 'y': 68})\n",
+ "2025-08-11 17:25:26,325 - agent.ComputerAgent - INFO - Computer: click({'x': 105, 'y': 197})\n",
+ "\u001b[92m17:25:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/050a0934-63e8-46a0-8868-de32b28174ef/reset \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3379/7340 [119:08<139:39, 28.4 steps/min]2025-08-11 17:25:27,009 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 278})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3381/7340 [119:09<139:31, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:28,351 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:25:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:25:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:29,035 - agent.ComputerAgent - INFO - Computer: click({'x': 906, 'y': 313})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:25:31,002 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 46%|██████████████████----------------------| 3382/7340 [119:12<139:30, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3383/7340 [119:13<139:27, 28.4 steps/min]\u001b[92m17:25:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:32,869 - agent.ComputerAgent - INFO - Computer: click({'x': 443, 'y': 495})\n",
+ "\u001b[92m17:25:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:33,550 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 369, 'y': 306}, {'x': 364, 'y': 306}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3383/7340 [119:15<139:29, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:34,213 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:25:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:25:34,882 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:25:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3385/7340 [119:16<139:21, 28.4 steps/min]2025-08-11 17:25:35,561 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:25:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:36,212 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:25:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:25:36,903 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:25:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 46%|██████████████████----------------------| 3395/7340 [119:18<138:38, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:39,028 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:25:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:25:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3395/7340 [119:20<138:40, 28.4 steps/min]2025-08-11 17:25:39,712 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:25:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:25:40,406 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 261})\n",
+ " 46%|██████████████████----------------------| 3395/7340 [119:22<138:42, 28.4 steps/min]2025-08-11 17:25:41,560 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:25:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d351b561-0537-4e9c-84fc-8e1905f2f2c8/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055e9f8b-8c01-4732-8b5f-ef4fc732f122/close \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3396/7340 [119:24<138:40, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:43,551 - agent.ComputerAgent - INFO - Agent: Attached aws-bill.pdf from your home directory to the email. I did not send or close the email.\n",
+ "Task completed.\n",
+ "2025-08-11 17:25:44,202 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 354\n",
+ " - prompt_tokens: 6715\n",
+ " - total_tokens: 7069\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0119\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3397/7340 [119:27<138:40, 28.4 steps/min]\u001b[92m17:25:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:25:46,904 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:25:46,905 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 289})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:25:47,552 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:25:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3397/7340 [119:29<138:41, 28.4 steps/min]2025-08-11 17:25:48,728 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:25:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 46%|██████████████████----------------------| 3398/7340 [119:30<138:38, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.79s/it]\u001b[92m17:25:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.66s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3398/7340 [119:33<138:41, 28.4 steps/min]\u001b[92m17:25:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:25:52,322 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:25:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it] 28.4 steps/min]\n",
+ "2025-08-11 17:25:53,721 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:25:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 46%|██████████████████----------------------| 3398/7340 [119:36<138:44, 28.4 steps/min]\u001b[92m17:25:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3398/7340 [119:37<138:46, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:56,310 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 249})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:56,950 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 301})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 125, 'y': 301})\n",
+ " 46%|██████████████████----------------------| 3398/7340 [119:38<138:47, 28.4 steps/min]\u001b[92m17:25:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:25:57,616 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:25:57,618 - agent.ComputerAgent - INFO - Computer: click({'x': 12, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 12, 'y': 524})\n",
+ "2025-08-11 17:25:58,255 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 257})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 257})\n",
+ "2025-08-11 17:25:58,926 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 238})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 238})\n",
+ "\u001b[92m17:25:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:25:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3400/7340 [119:40<138:41, 28.4 steps/min]2025-08-11 17:25:59,577 - agent.ComputerAgent - INFO - Computer: click({'x': 418, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 418, 'y': 385})\n",
+ "2025-08-11 17:26:00,261 - agent.ComputerAgent - INFO - Computer: click({'x': 51, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 51, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:26:01,599 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3403/7340 [119:44<138:31, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:03,623 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:26:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:26:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 46%|██████████████████----------------------| 3405/7340 [119:45<138:23, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:26:04,287 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 412})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 194, 'y': 412})\n",
+ "\u001b[92m17:26:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:04,977 - agent.ComputerAgent - INFO - Computer: click({'x': 97, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 97, 'y': 53})\n",
+ " 46%|██████████████████----------------------| 3407/7340 [119:49<138:19, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3420/7340 [119:50<137:22, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/eb77de2d-4b43-46b5-914e-6fc93a66ecb0/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3420/7340 [119:52<137:23, 28.5 steps/min]2025-08-11 17:26:10,901 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:26:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:26:11,573 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:26:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:12,222 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:26:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:12,883 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:26:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3420/7340 [119:54<137:26, 28.5 steps/min]2025-08-11 17:26:13,566 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:26:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:14,201 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:26:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:14,880 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:26:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3420/7340 [119:56<137:28, 28.5 steps/min]2025-08-11 17:26:15,563 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:26:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:16,251 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:26:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3420/7340 [120:01<137:33, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:26:21,148 - agent.ComputerAgent - INFO - Computer: type({'text': 'Best Long Paper'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Best Long Paper'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3420/7340 [120:03<137:36, 28.5 steps/min]\u001b[92m17:26:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]2025-08-11 17:26:24,670 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3421/7340 [120:08<137:37, 28.5 steps/min]\u001b[92m17:26:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "\u001b[92m17:26:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:28,339 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:26:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:26:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:26:30,387 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 47%|██████████████████----------------------| 3421/7340 [120:12<137:41, 28.5 steps/min]\u001b[92m17:26:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:26:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:26:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:31,738 - agent.ComputerAgent - INFO - Computer: click({'x': 693, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 693, 'y': 130})\n",
+ "2025-08-11 17:26:32,387 - agent.ComputerAgent - INFO - Computer: click({'x': 327, 'y': 549})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 327, 'y': 549})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:26:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:26:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:33,715 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:26:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:26:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:26:34,362 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 418})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 263, 'y': 418})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 47%|██████████████████----------------------| 3421/7340 [120:16<137:47, 28.4 steps/min]\u001b[92m17:26:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:35,709 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 174})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 174})\n",
+ "\u001b[92m17:26:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:26:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:26:37,040 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/Desktop\\n'})\n",
+ "2025-08-11 17:26:37,668 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 349, 'y': 306})\n",
+ "2025-08-11 17:26:38,342 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 151, 'y': 52})\n",
+ "\u001b[92m17:26:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3424/7340 [120:20<137:37, 28.5 steps/min]2025-08-11 17:26:39,005 - agent.ComputerAgent - INFO - Computer: click({'x': 101, 'y': 122})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 101, 'y': 122})\n",
+ " 47%|██████████████████----------------------| 3429/7340 [120:23<137:18, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:26:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3429/7340 [120:25<137:20, 28.5 steps/min]\u001b[92m17:26:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:26:44,726 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m17:26:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/730002fc-5760-41b0-97b8-f6783353a242/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:26:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3434/7340 [120:26<136:59, 28.5 steps/min]2025-08-11 17:26:45,400 - agent.ComputerAgent - INFO - Computer: click({'x': 468, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 468, 'y': 101})\n",
+ "2025-08-11 17:26:46,080 - agent.ComputerAgent - INFO - Computer: click({'x': 124, 'y': 270})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 124, 'y': 270})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:26:47,367 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ " 47%|██████████████████----------------------| 3434/7340 [120:29<137:02, 28.5 steps/min]\u001b[92m17:26:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:48,015 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:26:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:48,684 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:26:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:49,374 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:26:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:26:50,020 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:26:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:26:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:32<136:57, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:26:51,394 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:26:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:26:52,085 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:26:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:33<136:59, 28.5 steps/min]2025-08-11 17:26:52,757 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:26:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:26:53,416 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:26:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:35<137:00, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it] 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:26:55,905 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:26:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it] 28.5 steps/min]2025-08-11 17:26:56,567 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:26:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:26:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:39<137:04, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 17:26:58,614 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:26:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:40<137:06, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:41<137:07, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:42<137:08, 28.5 steps/min]\u001b[92m17:27:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:01,383 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 53})\n",
+ "\u001b[92m17:27:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:02,032 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 418})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 212, 'y': 418})\n",
+ " 47%|██████████████████----------------------| 3436/7340 [120:43<137:10, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:03,876 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n n = len(arr)\\n for i in range(n):\\n swapped = False\\n for j in range(0, n - i - 1):\\n if arr[j] > arr[j + 1]:\\n arr[j], arr[j + 1] = arr[j + 1], arr[j]\\n swapped = True\\n if not swapped:\\n break'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\n n = len(arr)\\n for i in range(n):\\n swapped = False\\n for j in range(0, n - i - 1):\\n if arr[j] > arr[j + 1]:\\n arr[j], arr[j + 1] = arr[j + 1], arr[j]\\n swapped = True\\n if not swapped:\\n break'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3439/7340 [120:46<137:00, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3439/7340 [120:47<137:01, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:07,882 - agent.ComputerAgent - INFO - Computer: type({'text': 'libreoffice --headless --convert-to pdf --outdir . -- *.doc\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'libreoffice --headless --convert-to pdf --outdir . -- *.doc\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:27:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3439/7340 [120:50<137:04, 28.5 steps/min]2025-08-11 17:27:09,249 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:27:09,250 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 491})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 984, 'y': 491})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:09,926 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:27:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3440/7340 [120:52<137:02, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:11,254 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 153, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:12,616 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:13,257 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:27:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:14,573 - agent.ComputerAgent - INFO - Computer: type({'text': 'do not track'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'do not track'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:15,903 - agent.ComputerAgent - INFO - Computer: type({'text': '30'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '30'})\n",
+ " 47%|██████████████████----------------------| 3441/7340 [120:57<137:03, 28.4 steps/min]2025-08-11 17:27:16,549 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:27:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:27:17,244 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 329})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 268, 'y': 329})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3444/7340 [120:59<136:52, 28.5 steps/min]\u001b[92m17:27:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:18,557 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:19,249 - agent.ComputerAgent - INFO - Computer: click({'x': 955, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 955, 'y': 130})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3445/7340 [121:00<136:49, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3446/7340 [121:02<136:46, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:21,579 - agent.ComputerAgent - INFO - Computer: click({'x': 188, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 188, 'y': 105})\n",
+ " 47%|██████████████████----------------------| 3446/7340 [121:03<136:47, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3447/7340 [121:04<136:44, 28.5 steps/min]2025-08-11 17:27:23,268 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:27:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:23,935 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:27:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3447/7340 [121:05<136:45, 28.5 steps/min]2025-08-11 17:27:24,567 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:27:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:27:25,196 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3447/7340 [121:07<136:47, 28.5 steps/min]2025-08-11 17:27:25,887 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3447/7340 [121:08<136:48, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:27,295 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:27:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:27,985 - agent.ComputerAgent - INFO - Computer: click({'x': 262, 'y': 479})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 262, 'y': 479})\n",
+ " 47%|██████████████████----------------------| 3447/7340 [121:09<136:50, 28.4 steps/min]2025-08-11 17:27:28,643 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:27:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3448/7340 [121:11<136:47, 28.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:30,354 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:27:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3448/7340 [121:12<136:48, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:31,027 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 134})\n",
+ " 47%|██████████████████----------------------| 3449/7340 [121:14<136:46, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:33,880 - agent.ComputerAgent - INFO - Agent: Added the Dissolve slide transition to the first slide and saved the presentation.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: Added the Dissolve slide transition to the first slide and saved the presentation.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 17:27:34,565 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 475\n",
+ " - prompt_tokens: 6691\n",
+ " - total_tokens: 7166\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4608\n",
+ " - response_cost: $0.0079\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 475\n",
+ " - prompt_tokens: 6691\n",
+ " - total_tokens: 7166\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 4608\n",
+ " - response_cost: $0.0079\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3450/7340 [121:17<136:45, 28.4 steps/min]\u001b[92m17:27:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:35,955 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:27:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:36,603 - agent.ComputerAgent - INFO - Computer: move({'x': 166, 'y': 68})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 166, 'y': 68})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3450/7340 [121:18<136:46, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:37,265 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:27:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:27:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3451/7340 [121:19<136:43, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:27:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:39,152 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 158})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 158})\n",
+ " 47%|██████████████████----------------------| 3451/7340 [121:20<136:44, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:41,144 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 47%|██████████████████----------------------| 3452/7340 [121:23<136:43, 28.4 steps/min]\u001b[92m17:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:42,465 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:27:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:43,111 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 145})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 984, 'y': 145})\n",
+ "\u001b[92m17:27:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3452/7340 [121:24<136:45, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:27:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:44,178 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:27:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:45,504 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3453/7340 [121:27<136:43, 28.4 steps/min]2025-08-11 17:27:46,827 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 86, 'y': 123}, {'x': 83, 'y': 250}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 86, 'y': 123}, {'x': 83, 'y': 250}]})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:27:47,486 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3466/7340 [121:29<135:47, 28.5 steps/min]\u001b[92m17:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:48,140 - agent.ComputerAgent - INFO - Computer: click({'x': 225, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 225, 'y': 564})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:49,491 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -1 *.doc\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ls -1 *.doc\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a4b4d291-1fca-4038-8670-448014a55182/reset \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3467/7340 [121:31<135:45, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3b14802-9f99-46f5-8fa9-9661af7a973d/close \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:32<135:38, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:33<135:39, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:27:53,445 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:27:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<135:40, 28.5 steps/min]2025-08-11 17:27:54,575 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:27:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:36<135:42, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:27:56,145 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\u001b[92m17:27:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:27:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:38<135:44, 28.5 steps/min]2025-08-11 17:27:57,737 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:39<135:45, 28.5 steps/min]2025-08-11 17:27:58,744 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:27:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 17:27:59,890 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:42<135:48, 28.5 steps/min]\u001b[92m17:28:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:44<135:51, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:28:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:04,234 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 194, 'y': 133})\n",
+ "\u001b[92m17:28:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/reset \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3469/7340 [121:45<135:52, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:04,896 - agent.ComputerAgent - INFO - Computer: click({'x': 517, 'y': 99})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 517, 'y': 99})\n",
+ "\u001b[92m17:28:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:05,536 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 182, 'y': 166})\n",
+ "\u001b[92m17:28:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3470/7340 [121:47<135:49, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:06,212 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 163})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 66, 'y': 163})\n",
+ "2025-08-11 17:28:06,857 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 359})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 275, 'y': 359})\n",
+ "\u001b[92m17:28:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3472/7340 [121:49<135:42, 28.5 steps/min]2025-08-11 17:28:08,123 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:28:08,123 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 143})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 47%|██████████████████----------------------| 3474/7340 [121:50<135:35, 28.5 steps/min]\u001b[92m17:28:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:09,345 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:10,652 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3475/7340 [121:52<135:33, 28.5 steps/min]2025-08-11 17:28:11,306 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:28:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3477/7340 [121:53<135:25, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 47%|██████████████████----------------------| 3477/7340 [121:54<135:26, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:13,527 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:28:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:14,215 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:28:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3477/7340 [121:55<135:28, 28.5 steps/min]2025-08-11 17:28:15,604 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:28:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:28:16,287 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:28:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3477/7340 [121:58<135:30, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:16,938 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:17,607 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:18,975 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ " 47%|██████████████████----------------------| 3477/7340 [122:00<135:33, 28.5 steps/min]2025-08-11 17:28:20,020 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:28:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:21,421 - agent.ComputerAgent - INFO - Computer: type({'text': 'soffice --headless --convert-to pdf --outdir . *.doc\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'soffice --headless --convert-to pdf --outdir . *.doc\\n'})\n",
+ " 47%|██████████████████----------------------| 3478/7340 [122:03<135:31, 28.5 steps/min]2025-08-11 17:28:22,564 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:28:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3479/7340 [122:06<135:30, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 47%|██████████████████----------------------| 3479/7340 [122:07<135:31, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:26,425 - agent.ComputerAgent - INFO - Computer: click({'x': 499, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 499, 'y': 426})\n",
+ " 47%|██████████████████----------------------| 3479/7340 [122:08<135:33, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3480/7340 [122:09<135:29, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:28,266 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:28:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:28:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:28,948 - agent.ComputerAgent - INFO - Computer: click({'x': 337, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 337, 'y': 306})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:31,589 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:28:31,590 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+o'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+o'})\n",
+ " 47%|██████████████████----------------------| 3480/7340 [122:13<135:34, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:32,984 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:34,373 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:35,667 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ " 47%|██████████████████----------------------| 3481/7340 [122:17<135:34, 28.5 steps/min]\u001b[92m17:28:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:28:36,313 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 415})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 415})\n",
+ "2025-08-11 17:28:36,999 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 128})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:37,652 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:28:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3482/7340 [122:20<135:32, 28.5 steps/min]2025-08-11 17:28:38,998 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -583, 'scroll_x': 0, 'x': 518, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -583, 'scroll_x': 0, 'x': 518, 'y': 432})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:40,053 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:28:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:28:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 47%|██████████████████----------------------| 3484/7340 [122:21<135:25, 28.5 steps/min]2025-08-11 17:28:40,735 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 371})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 371})\n",
+ "2025-08-11 17:28:41,407 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:28:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 47%|██████████████████----------------------| 3485/7340 [122:23<135:22, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:43,807 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+1'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+1'})\n",
+ " 47%|██████████████████----------------------| 3486/7340 [122:25<135:20, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:28:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:45,113 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:28:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:45,783 - agent.ComputerAgent - INFO - Computer: click({'x': 185, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 185, 'y': 213})\n",
+ " 47%|██████████████████----------------------| 3486/7340 [122:27<135:23, 28.5 steps/min]\u001b[92m17:28:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:46,824 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 271})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3487/7340 [122:28<135:19, 28.5 steps/min]2025-08-11 17:28:47,430 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:28:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:28:48,066 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:28:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:28:48,718 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:28:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:28:49,358 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:28:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3488/7340 [122:31<135:18, 28.5 steps/min]2025-08-11 17:28:50,387 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:28:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3488/7340 [122:32<135:19, 28.5 steps/min]2025-08-11 17:28:51,058 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:28:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3488/7340 [122:33<135:20, 28.5 steps/min]2025-08-11 17:28:52,218 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:28:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:28:54,596 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 48%|███████████████████---------------------| 3488/7340 [122:36<135:24, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:28:55,239 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:28:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:28:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:28:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:57,210 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:28:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:28:57,887 - agent.ComputerAgent - INFO - Computer: click({'x': 956, 'y': 132})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 956, 'y': 132})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:28:59,873 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://extensions'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'chrome://extensions'})\n",
+ " 48%|███████████████████---------------------| 3488/7340 [122:41<135:29, 28.4 steps/min]\u001b[92m17:28:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "ERROR:asyncio:Unclosed client session\n",
+ "client_session: \n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:28:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:29:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:01,218 - agent.ComputerAgent - INFO - Computer: click({'x': 521, 'y': 422})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 521, 'y': 422})\n",
+ "2025-08-11 17:29:01,891 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 335})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:02,530 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3490/7340 [122:44<135:23, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:03,204 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:29:03,205 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 960, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 960, 'y': 713})\n",
+ "2025-08-11 17:29:04,634 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -646, 'scroll_x': 0, 'x': 890, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -646, 'scroll_x': 0, 'x': 890, 'y': 760})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3492/7340 [122:47<135:18, 28.4 steps/min]\u001b[92m17:29:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:05,965 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:29:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:06,998 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:29:06,998 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:29:08,322 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 48%|███████████████████---------------------| 3496/7340 [122:54<135:08, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:29:13,557 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:29:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3496/7340 [122:55<135:09, 28.4 steps/min]2025-08-11 17:29:14,241 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:29:14,878 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:29:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:15,558 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:29:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3496/7340 [122:57<135:12, 28.4 steps/min]\u001b[92m17:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:16,889 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:17,556 - agent.ComputerAgent - INFO - Computer: click({'x': 599, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 599, 'y': 760})\n",
+ " 48%|███████████████████---------------------| 3496/7340 [122:59<135:13, 28.4 steps/min]2025-08-11 17:29:18,237 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:18,917 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3497/7340 [123:00<135:10, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:19,548 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:29:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3497/7340 [123:01<135:12, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3497/7340 [123:03<135:14, 28.4 steps/min]\u001b[92m17:29:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:23,251 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3497/7340 [123:06<135:17, 28.4 steps/min]\u001b[92m17:29:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:25,847 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 612, 'x': 655, 'y': 419})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 612, 'x': 655, 'y': 419})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:29:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 48%|███████████████████---------------------| 3499/7340 [123:08<135:11, 28.4 steps/min]\u001b[92m17:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:27,867 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:28,508 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 256, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:31,110 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m17:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:31,743 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 182, 'y': 105})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:29:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3499/7340 [123:14<135:16, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:33,138 - agent.ComputerAgent - INFO - Computer: click({'x': 634, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 634, 'y': 529})\n",
+ "2025-08-11 17:29:33,791 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 82, 'y': 124}, {'x': 75, 'y': 124}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 82, 'y': 124}, {'x': 75, 'y': 124}]})\n",
+ "2025-08-11 17:29:34,450 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -517, 'scroll_x': 0, 'x': 46, 'y': 762})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -517, 'scroll_x': 0, 'x': 46, 'y': 762})\n",
+ "\u001b[92m17:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:35,110 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:35,790 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 673, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 673, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ "\u001b[92m17:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3501/7340 [123:17<135:11, 28.4 steps/min]2025-08-11 17:29:36,477 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 333})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 333})\n",
+ "\u001b[92m17:29:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:37,144 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 329})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 268, 'y': 329})\n",
+ "2025-08-11 17:29:37,807 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 332, 'y': 308}, {'x': 345, 'y': 308}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 332, 'y': 308}, {'x': 345, 'y': 308}]})\n",
+ " 48%|███████████████████---------------------| 3508/7340 [123:20<134:44, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:29:39,468 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m17:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:29:40,852 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:29:42,174 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 48%|███████████████████---------------------| 3508/7340 [123:23<134:47, 28.4 steps/min]2025-08-11 17:29:43,192 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:29:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3509/7340 [123:25<134:44, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:29:43,878 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:29:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:44,537 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:29:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:45,557 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3510/7340 [123:28<134:43, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:47,830 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:29:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:48,461 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:29:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:29:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3510/7340 [123:30<134:45, 28.4 steps/min]2025-08-11 17:29:49,532 - agent.ComputerAgent - INFO - Computer: click({'x': 728, 'y': 179})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 728, 'y': 179})\n",
+ "2025-08-11 17:29:50,219 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:29:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3510/7340 [123:32<134:48, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:52,661 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:29:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:29:53,352 - agent.ComputerAgent - INFO - Computer: double_click({'x': 181, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 181, 'y': 105})\n",
+ "\u001b[92m17:29:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:29:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3511/7340 [123:35<134:47, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:54,670 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 541})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 399, 'y': 541})\n",
+ "2025-08-11 17:29:55,330 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m17:29:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 48%|███████████████████---------------------| 3513/7340 [123:37<134:40, 28.4 steps/min]\u001b[92m17:29:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:57,015 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:29:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:29:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3514/7340 [123:38<134:37, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:29:57,710 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 400})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 400})\n",
+ "\u001b[92m17:29:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:29:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:29:59,392 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:29:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:30:00,039 - agent.ComputerAgent - INFO - Computer: click({'x': 1009, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1009, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 48%|███████████████████---------------------| 3515/7340 [123:41<134:36, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:30:00,720 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:30:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:30:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:01,758 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -657, 'scroll_x': 0, 'x': 988, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -657, 'scroll_x': 0, 'x': 988, 'y': 427})\n",
+ " 48%|███████████████████---------------------| 3517/7340 [123:43<134:29, 28.4 steps/min]2025-08-11 17:30:02,459 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:30:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:30:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b6bbc5bc-5598-4043-be1e-6ebf2da5f046/close \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3518/7340 [123:44<134:26, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:30:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:04,432 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 638, 'scroll_x': 0, 'x': 90, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 638, 'scroll_x': 0, 'x': 90, 'y': 244})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3518/7340 [123:46<134:27, 28.4 steps/min]2025-08-11 17:30:05,089 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m17:30:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3519/7340 [123:47<134:24, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3519/7340 [123:48<134:25, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:07,300 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:08,649 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:30:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3520/7340 [123:50<134:23, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:30:09,313 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:30:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:30:10,392 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:30:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3520/7340 [123:52<134:26, 28.4 steps/min]\u001b[92m17:30:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]2025-08-11 17:30:12,609 - agent.ComputerAgent - INFO - Computer: type({'text': 'list find'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'list find'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it] 28.4 steps/min]2025-08-11 17:30:13,955 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:30:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3521/7340 [123:56<134:25, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.64s/it]2025-08-11 17:30:15,411 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:30:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3521/7340 [123:57<134:26, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ " 48%|███████████████████---------------------| 3521/7340 [123:58<134:27, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:30:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:30:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:30:19,140 - agent.ComputerAgent - INFO - Computer: type({'text': 'for f in *.doc; do [ -f \"${f%.doc}.pdf\" ] || echo \"Missing: ${f%.doc}.pdf\"; done\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'for f in *.doc; do [ -f \"${f%.doc}.pdf\" ] || echo \"Missing: ${f%.doc}.pdf\"; done\\n'})\n",
+ " 48%|███████████████████---------------------| 3521/7340 [124:00<134:30, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:30:19,822 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:20,447 - agent.ComputerAgent - INFO - Computer: click({'x': 224, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 224, 'y': 564})\n",
+ "2025-08-11 17:30:21,107 - agent.ComputerAgent - INFO - Computer: click({'x': 190, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 190, 'y': 133})\n",
+ "2025-08-11 17:30:21,785 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 139})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3522/7340 [124:04<134:29, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:24,461 - agent.ComputerAgent - INFO - Computer: type({'text': '=(B2-C2)-(D2+F2+G2+H2)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=(B2-C2)-(D2+F2+G2+H2)'})\n",
+ "\u001b[92m17:30:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3526/7340 [124:06<134:15, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:25,823 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 53})\n",
+ "\u001b[92m17:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:30:26,494 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 123})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 123})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3527/7340 [124:08<134:12, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:30:27,854 - agent.ComputerAgent - INFO - Computer: double_click({'x': 197, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 197, 'y': 111})\n",
+ "\u001b[92m17:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:28,489 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 32})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 32})\n",
+ " 48%|███████████████████---------------------| 3529/7340 [124:10<134:05, 28.4 steps/min]2025-08-11 17:30:29,134 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:30:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f8984906-7392-4305-88fa-ae9a4808fa8d/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3531/7340 [124:11<133:58, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:30:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:30:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3531/7340 [124:12<133:59, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:30:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3531/7340 [124:14<134:01, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:34,059 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:30:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it] 28.4 steps/min]2025-08-11 17:30:34,709 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:30:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:35,765 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:30:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.72s/it] 28.4 steps/min]2025-08-11 17:30:36,513 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:30:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:30:37,229 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:30:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]2025-08-11 17:30:38,038 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:30:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it] 28.4 steps/min]\n",
+ "2025-08-11 17:30:39,450 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:30:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:30:40,279 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:30:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3531/7340 [124:22<134:09, 28.4 steps/min]2025-08-11 17:30:40,950 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:30:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3531/7340 [124:23<134:10, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:30:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:30:42,692 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 346, 'y': 306}, {'x': 346, 'y': 306}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 346, 'y': 306}, {'x': 346, 'y': 306}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3531/7340 [124:25<134:12, 28.4 steps/min]\u001b[92m17:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:43,993 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 301})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 301})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 48%|███████████████████---------------------| 3532/7340 [124:26<134:09, 28.4 steps/min]\u001b[92m17:30:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:45,167 - agent.ComputerAgent - INFO - Computer: click({'x': 334, 'y': 341})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 334, 'y': 341})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/reset \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3533/7340 [124:27<134:06, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:30:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:47,037 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 291})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 633, 'y': 291})\n",
+ " 48%|███████████████████---------------------| 3534/7340 [124:28<134:03, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:48,728 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:30:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3535/7340 [124:30<134:01, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3535/7340 [124:31<134:02, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:51,059 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:30:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3535/7340 [124:32<134:03, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:30:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:51,761 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 72, 'y': 271})\n",
+ "2025-08-11 17:30:52,790 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:30:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:54,140 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ " 48%|███████████████████---------------------| 3535/7340 [124:35<134:06, 28.4 steps/min]2025-08-11 17:30:54,801 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:30:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:30:55,470 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:30:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/127b9298-d3cc-4b90-8567-e45146efa729/reset \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3537/7340 [124:37<133:59, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3537/7340 [124:38<134:00, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:30:57,986 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 48%|███████████████████---------------------| 3537/7340 [124:39<134:02, 28.4 steps/min]\u001b[92m17:30:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:30:58,654 - agent.ComputerAgent - INFO - Computer: click({'x': 711, 'y': 59})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 711, 'y': 59})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:31:00,029 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3541/7340 [124:41<133:46, 28.4 steps/min]2025-08-11 17:31:00,659 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:31:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3543/7340 [124:43<133:39, 28.4 steps/min]\u001b[92m17:31:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4b18a76d-ef46-4622-9643-9ee6fe4900a3/close \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3543/7340 [124:44<133:40, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:31:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:04,014 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 257})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 489, 'y': 257})\n",
+ "\u001b[92m17:31:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3543/7340 [124:45<133:42, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:04,672 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:31:04,673 - agent.ComputerAgent - INFO - Computer: click({'x': 567, 'y': 29})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 567, 'y': 29})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3544/7340 [124:47<133:40, 28.4 steps/min]\u001b[92m17:31:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:31:07,507 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]\u001b[92m17:31:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:31:08,161 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 48%|███████████████████---------------------| 3545/7340 [124:49<133:38, 28.4 steps/min]2025-08-11 17:31:08,822 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.72s/it]2025-08-11 17:31:09,963 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it] 28.4 steps/min]\n",
+ " 48%|███████████████████---------------------| 3545/7340 [124:53<133:42, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:31:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:13,623 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:31:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:31:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3545/7340 [124:55<133:44, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:14,300 - agent.ComputerAgent - INFO - Computer: double_click({'x': 916, 'y': 712})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 916, 'y': 712})\n",
+ "\u001b[92m17:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:14,961 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:31:15,586 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ " 48%|███████████████████---------------------| 3545/7340 [124:57<133:46, 28.4 steps/min]\u001b[92m17:31:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:16,270 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 123})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 123})\n",
+ "2025-08-11 17:31:16,909 - agent.ComputerAgent - INFO - Computer: click({'x': 336, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 336, 'y': 339})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3547/7340 [124:59<133:39, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:19,522 - agent.ComputerAgent - INFO - Computer: click({'x': 331, 'y': 551})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 331, 'y': 551})\n",
+ " 48%|███████████████████---------------------| 3549/7340 [125:01<133:32, 28.4 steps/min]\u001b[92m17:31:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:20,185 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:31:20,185 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:22,175 - agent.ComputerAgent - INFO - Computer: click({'x': 962, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 962, 'y': 760})\n",
+ " 48%|███████████████████---------------------| 3550/7340 [125:03<133:31, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:31:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:23,520 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 627, 'scroll_x': 0, 'x': 625, 'y': 277})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 627, 'scroll_x': 0, 'x': 625, 'y': 277})\n",
+ "\u001b[92m17:31:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 48%|███████████████████---------------------| 3552/7340 [125:05<133:23, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:24,173 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 91, 'y': 442})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 91, 'y': 442})\n",
+ "\u001b[92m17:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:24,860 - agent.ComputerAgent - INFO - Computer: click({'x': 270, 'y': 322})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 270, 'y': 322})\n",
+ " 48%|███████████████████---------------------| 3553/7340 [125:06<133:20, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 48%|███████████████████---------------------| 3555/7340 [125:07<133:13, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:08<132:52, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1909e6f5-b395-4e1d-b1f7-b06406f8731b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:31:28,194 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:31:29,924 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:11<132:55, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:31:31,242 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:31:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:31:31,894 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:31:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:13<132:57, 28.4 steps/min]2025-08-11 17:31:32,563 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:31:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:31:33,485 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:31:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:15<132:59, 28.4 steps/min]2025-08-11 17:31:34,143 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:31:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:31:35,074 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:16<133:01, 28.4 steps/min]2025-08-11 17:31:35,770 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:31:36,669 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:31:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it] 28.4 steps/min]\n",
+ "2025-08-11 17:31:37,394 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:31:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:19<133:04, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3560/7340 [125:20<133:05, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:40,447 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo \"DOC files:\" $(ls -1 *.doc | wc -l); echo \"Matching PDFs:\" $(for f in *.doc; do [ -f \"${f%.doc}.pdf\" ] && echo 1; done | wc -l)\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'echo \"DOC files:\" $(ls -1 *.doc | wc -l); echo \"Matching PDFs:\" $(for f in *.doc; do [ -f \"${f%.doc}.pdf\" ] && echo 1; done | wc -l)\\n'})\n",
+ "\u001b[92m17:31:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/57944bbf-74a1-4e6d-9401-f7b0144460f7/close \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3567/7340 [125:22<132:36, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:41,134 - agent.ComputerAgent - INFO - Computer: click({'x': 634, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 634, 'y': 529})\n",
+ " 49%|███████████████████---------------------| 3568/7340 [125:23<132:33, 28.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3569/7340 [125:26<132:32, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3569/7340 [125:27<132:33, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]\u001b[92m17:31:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3569/7340 [125:29<132:35, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]\u001b[92m17:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3569/7340 [125:32<132:38, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 17:31:51,123 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3569/7340 [125:34<132:40, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:53,461 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:54,120 - agent.ComputerAgent - INFO - Computer: click({'x': 335, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 335, 'y': 339})\n",
+ " 49%|███████████████████---------------------| 3569/7340 [125:35<132:42, 28.4 steps/min]\u001b[92m17:31:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:54,773 - agent.ComputerAgent - INFO - Computer: click({'x': 384, 'y': 349})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 384, 'y': 349})\n",
+ "\u001b[92m17:31:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:55,420 - agent.ComputerAgent - INFO - Computer: click({'x': 753, 'y': 173})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 753, 'y': 173})\n",
+ " 49%|███████████████████---------------------| 3570/7340 [125:37<132:39, 28.4 steps/min]\u001b[92m17:31:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:31:56,085 - agent.ComputerAgent - INFO - Computer: click({'x': 147, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 147, 'y': 53})\n",
+ "\u001b[92m17:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:56,746 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 386})\n",
+ "2025-08-11 17:31:57,392 - agent.ComputerAgent - INFO - Computer: click({'x': 997, 'y': 29})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 997, 'y': 29})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:31:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3572/7340 [125:39<132:33, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:58,672 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 301})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 301})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 49%|███████████████████---------------------| 3575/7340 [125:40<132:21, 28.4 steps/min]\u001b[92m17:31:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:31:59,838 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 646, 'scroll_x': 0, 'x': 630, 'y': 701})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 646, 'scroll_x': 0, 'x': 630, 'y': 701})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/reset \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3577/7340 [125:43<132:16, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3577/7340 [125:44<132:17, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:32:03,712 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:32:04,430 - agent.ComputerAgent - INFO - Computer: click({'x': 82, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 82, 'y': 124})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3577/7340 [125:46<132:18, 28.4 steps/min]2025-08-11 17:32:05,105 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:05,763 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:06,442 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:32:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:07,107 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:32:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3578/7340 [125:50<132:18, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:32:09,794 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:32:09,795 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "2025-08-11 17:32:10,452 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:32:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3578/7340 [125:52<132:20, 28.4 steps/min]\u001b[92m17:32:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:32:11,119 - agent.ComputerAgent - INFO - Computer: click({'x': 825, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 825, 'y': 232})\n",
+ "2025-08-11 17:32:11,821 - agent.ComputerAgent - INFO - Computer: click({'x': 310, 'y': 220})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 310, 'y': 220})\n",
+ "2025-08-11 17:32:12,449 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3579/7340 [125:54<132:18, 28.4 steps/min]2025-08-11 17:32:13,116 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:13,768 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3581/7340 [125:55<132:11, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3581/7340 [125:59<132:15, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:32:20,196 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3581/7340 [126:01<132:17, 28.4 steps/min]2025-08-11 17:32:20,833 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:32:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:21,850 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:32:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3581/7340 [126:03<132:19, 28.4 steps/min]2025-08-11 17:32:22,508 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3581/7340 [126:04<132:21, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:32:23,850 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:32:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:32:31,492 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:32:32,860 - agent.ComputerAgent - INFO - Agent: All .doc files in the current directory have been converted to PDF.\n",
+ "\n",
+ "Command executed:\n",
+ "soffice --headless --convert-to pdf --outdir . *.doc\n",
+ "\n",
+ "Result:\n",
+ "- 12 .doc files detected\n",
+ "- 12 corresponding PDFs created in the same directory (Desktop) with matching base filenames.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: All .doc files in the current directory have been converted to PDF.\n",
+ "\n",
+ "Command executed:\n",
+ "soffice --headless --convert-to pdf --outdir . *.doc\n",
+ "\n",
+ "Result:\n",
+ "- 12 .doc files detected\n",
+ "- 12 corresponding PDFs created in the same directory (Desktop) with matching base filenames.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 17:32:33,503 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 458\n",
+ " - prompt_tokens: 8792\n",
+ " - total_tokens: 9250\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0156\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 458\n",
+ " - prompt_tokens: 8792\n",
+ " - total_tokens: 9250\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0156\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3582/7340 [126:16<132:28, 28.4 steps/min]\u001b[92m17:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 49%|███████████████████---------------------| 3583/7340 [126:19<132:27, 28.4 steps/min]\u001b[92m17:32:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:32:38,245 - agent.ComputerAgent - INFO - Computer: double_click({'x': 371, 'y': 349})\n",
+ "2025-08-11 17:32:38,925 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 148})\n",
+ "2025-08-11 17:32:39,563 - agent.ComputerAgent - INFO - Computer: click({'x': 334, 'y': 341})\n",
+ "\u001b[92m17:32:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:32:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3583/7340 [126:21<132:29, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:32:40,250 - agent.ComputerAgent - INFO - Computer: click({'x': 345, 'y': 741})\n",
+ "2025-08-11 17:32:40,888 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 181})\n",
+ "2025-08-11 17:32:41,581 - agent.ComputerAgent - INFO - Computer: double_click({'x': 744, 'y': 178})\n",
+ "\u001b[92m17:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3586/7340 [126:23<132:18, 28.4 steps/min]2025-08-11 17:32:42,238 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 655, 'scroll_x': 0, 'x': 623, 'y': 290})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3592/7340 [126:24<131:53, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/dc026dd3-8d59-43e0-a475-ecef72f1db12/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:25<131:50, 28.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:26<131:51, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:27<131:52, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:28<131:53, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:32:48,429 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:32:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it] 28.4 steps/min]2025-08-11 17:32:49,201 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:32:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:32:49,883 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:32:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]2025-08-11 17:32:50,743 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:32:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 17:32:51,364 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:33<131:58, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:52,033 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:32:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:53,081 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:32:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:34<132:00, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:32:54,112 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:35<132:01, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:37<132:02, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:32:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:32:56,857 - agent.ComputerAgent - INFO - Computer: click({'x': 923, 'y': 761})\n",
+ "\u001b[92m17:32:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a5f69ad6-9361-4670-b101-61761113341c/reset \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3593/7340 [126:38<132:04, 28.4 steps/min]2025-08-11 17:32:57,548 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 123})\n",
+ "2025-08-11 17:32:58,196 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:32:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3594/7340 [126:39<132:01, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3595/7340 [126:41<131:58, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:00,549 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 532})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3595/7340 [126:42<131:59, 28.4 steps/min]2025-08-11 17:33:01,162 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:33:01,820 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:44<131:57, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:45<131:58, 28.4 steps/min]2025-08-11 17:33:04,531 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:46<131:59, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:33:05,740 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:33:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:47<132:00, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:48<132:01, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:51<132:04, 28.3 steps/min]\u001b[92m17:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:10,832 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:33:10,833 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:33:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:12,125 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 53})\n",
+ "\u001b[92m17:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3596/7340 [126:53<132:07, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:12,809 - agent.ComputerAgent - INFO - Computer: double_click({'x': 422, 'y': 339})\n",
+ "2025-08-11 17:33:13,484 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 232, 'y': 349}, {'x': 531, 'y': 350}]})\n",
+ "2025-08-11 17:33:14,127 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 603, 'scroll_x': 0, 'x': 90, 'y': 244})\n",
+ "2025-08-11 17:33:14,769 - agent.ComputerAgent - INFO - Computer: double_click({'x': 327, 'y': 550})\n",
+ "\u001b[92m17:33:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3598/7340 [126:58<132:03, 28.3 steps/min]2025-08-11 17:33:17,393 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:33:17,395 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:33:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3602/7340 [126:59<131:47, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:18,699 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 651, 'scroll_x': 0, 'x': 687, 'y': 318})\n",
+ "\u001b[92m17:33:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:19,349 - agent.ComputerAgent - INFO - Computer: click({'x': 748, 'y': 178})\n",
+ "\u001b[92m17:33:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3603/7340 [127:01<131:44, 28.4 steps/min]2025-08-11 17:33:19,991 - agent.ComputerAgent - INFO - Computer: click({'x': 257, 'y': 355})\n",
+ "2025-08-11 17:33:20,635 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:33:20,635 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 333})\n",
+ " 49%|███████████████████---------------------| 3605/7340 [127:02<131:37, 28.4 steps/min]2025-08-11 17:33:21,291 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:33:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3607/7340 [127:03<131:29, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 49%|███████████████████---------------------| 3607/7340 [127:04<131:30, 28.4 steps/min]\u001b[92m17:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:23,614 - agent.ComputerAgent - INFO - Computer: click({'x': 873, 'y': 95})\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:06<131:28, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:33:25,823 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:33:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:07<131:29, 28.4 steps/min]2025-08-11 17:33:26,877 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:33:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:09<131:31, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:28,202 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:33:28,832 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:10<131:32, 28.4 steps/min]\u001b[92m17:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:29,469 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:33:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:33:30,112 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:33:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:11<131:34, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:30,779 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:33:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:33:31,470 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:33:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:33:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:13<131:35, 28.4 steps/min]2025-08-11 17:33:32,131 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 398, 'y': 741}, {'x': 105, 'y': 741}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 398, 'y': 741}, {'x': 105, 'y': 741}]})\n",
+ "2025-08-11 17:33:32,811 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:33:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:33:33,461 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:33:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:33:34,106 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3608/7340 [127:16<131:39, 28.3 steps/min]\u001b[92m17:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:35,965 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:33:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3609/7340 [127:17<131:35, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:33:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:33:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:37,162 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 85, 'y': 125}, {'x': 86, 'y': 125}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 85, 'y': 125}, {'x': 86, 'y': 125}]})\n",
+ " 49%|███████████████████---------------------| 3609/7340 [127:18<131:37, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3610/7340 [127:19<131:33, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3610/7340 [127:20<131:34, 28.3 steps/min]\u001b[92m17:33:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:39,454 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 270})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 86, 'y': 270})\n",
+ " 49%|███████████████████---------------------| 3611/7340 [127:21<131:31, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3611/7340 [127:23<131:32, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:41,893 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:42,562 - agent.ComputerAgent - INFO - Computer: click({'x': 891, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 891, 'y': 202})\n",
+ "\u001b[92m17:33:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/45b21d3b-9328-4819-bba2-f954432ba73e/reset \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3611/7340 [127:24<131:34, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:43,268 - agent.ComputerAgent - INFO - Computer: click({'x': 188, 'y': 272})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 188, 'y': 272})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3612/7340 [127:25<131:31, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:33:44,572 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:33:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:33:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:45,228 - agent.ComputerAgent - INFO - Computer: click({'x': 442, 'y': 389})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 442, 'y': 389})\n",
+ " 49%|███████████████████---------------------| 3613/7340 [127:26<131:28, 28.3 steps/min]2025-08-11 17:33:45,853 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:33:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:47,212 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:33:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/invoke \"HTTP/1.1 500 Internal Server Error\"\n",
+ " 49%|███████████████████---------------------| 3614/7340 [127:28<131:25, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:33:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:48,365 - agent.ComputerAgent - INFO - Computer: click({'id_path': '', 'x': 795, 'y': 109})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'id_path': '', 'x': 795, 'y': 109})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:33:49,696 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/792a6953-2092-47e4-a8a8-57a4af4e3be1/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3624/7340 [127:32<130:46, 28.4 steps/min]\u001b[92m17:33:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:33:50,980 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:33:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3624/7340 [127:34<130:49, 28.4 steps/min]\u001b[92m17:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:33:53,621 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 662, 'scroll_x': 0, 'x': 90, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 662, 'scroll_x': 0, 'x': 90, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.70s/it]\u001b[92m17:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:33:55,813 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it] 28.4 steps/min]2025-08-11 17:33:56,692 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:33:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:33:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 17:33:58,268 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:33:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 49%|███████████████████---------------------| 3625/7340 [127:40<130:50, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "\u001b[92m17:33:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 49%|███████████████████---------------------| 3625/7340 [127:41<130:51, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:00,853 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m17:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 49%|███████████████████---------------------| 3625/7340 [127:43<130:53, 28.4 steps/min]\u001b[92m17:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:02,525 - agent.ComputerAgent - INFO - Computer: click({'x': 124, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 124, 'y': 89})\n",
+ " 49%|███████████████████---------------------| 3625/7340 [127:44<130:54, 28.4 steps/min]\u001b[92m17:34:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:03,223 - agent.ComputerAgent - INFO - Computer: click({'x': 258, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 258, 'y': 203})\n",
+ "\u001b[92m17:34:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:03,902 - agent.ComputerAgent - INFO - Computer: click({'x': 315, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 315, 'y': 532})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:04,524 - agent.ComputerAgent - INFO - Computer: double_click({'x': 1008, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 1008, 'y': 164})\n",
+ "\u001b[92m17:34:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3627/7340 [127:46<130:48, 28.4 steps/min]2025-08-11 17:34:05,199 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 77})\n",
+ "2025-08-11 17:34:05,903 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:34:05,903 - agent.ComputerAgent - INFO - Computer: move({'x': 1018, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 1018, 'y': 10})\n",
+ "\u001b[92m17:34:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3630/7340 [127:48<130:37, 28.4 steps/min]2025-08-11 17:34:07,290 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 748, 'y': 182}, {'x': 745, 'y': 281}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 748, 'y': 182}, {'x': 745, 'y': 281}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:34:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3632/7340 [127:50<130:30, 28.4 steps/min]\u001b[92m17:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:09,239 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:09,926 - agent.ComputerAgent - INFO - Computer: double_click({'x': 432, 'y': 389})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 432, 'y': 389})\n",
+ "\u001b[92m17:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 49%|███████████████████---------------------| 3633/7340 [127:51<130:27, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:10,556 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 658, 'scroll_x': 0, 'x': 630, 'y': 705})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 658, 'scroll_x': 0, 'x': 630, 'y': 705})\n",
+ "\u001b[92m17:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:11,906 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m17:34:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:12,564 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 346, 'y': 741}, {'x': 85, 'y': 742}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 346, 'y': 741}, {'x': 85, 'y': 742}]})\n",
+ " 50%|███████████████████---------------------| 3634/7340 [127:54<130:26, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:13,907 - agent.ComputerAgent - INFO - Computer: click({'x': 826, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 826, 'y': 202})\n",
+ "\u001b[92m17:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 50%|███████████████████---------------------| 3637/7340 [127:55<130:14, 28.4 steps/min]2025-08-11 17:34:14,578 - agent.ComputerAgent - INFO - Computer: click({'x': 904, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 904, 'y': 234})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:15,233 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:15,913 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3638/7340 [127:57<130:12, 28.4 steps/min]2025-08-11 17:34:16,564 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:34:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:17,244 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3639/7340 [127:59<130:09, 28.4 steps/min]2025-08-11 17:34:17,894 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:18,937 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:34:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3639/7340 [128:01<130:12, 28.4 steps/min]\u001b[92m17:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:20,896 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m17:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3639/7340 [128:03<130:14, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:22,184 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:34:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:22,863 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 655, 'scroll_x': 0, 'x': 90, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 655, 'scroll_x': 0, 'x': 90, 'y': 244})\n",
+ "\u001b[92m17:34:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3639/7340 [128:04<130:15, 28.4 steps/min]2025-08-11 17:34:23,569 - agent.ComputerAgent - INFO - Computer: click({'x': 405, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 405, 'y': 214})\n",
+ "\u001b[92m17:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:24,961 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:34:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:25,626 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:34:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:26,658 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:34:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:34:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:27,338 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:34:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3641/7340 [128:09<130:11, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:28,046 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 75, 'y': 124}, {'x': 86, 'y': 124}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 75, 'y': 124}, {'x': 86, 'y': 124}]})\n",
+ "2025-08-11 17:34:29,092 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:34:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3642/7340 [128:10<130:09, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:29,786 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m17:34:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:31,455 - agent.ComputerAgent - INFO - Computer: type({'text': 'Note'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Note'})\n",
+ " 50%|███████████████████---------------------| 3643/7340 [128:13<130:07, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:33,799 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ " 50%|███████████████████---------------------| 3644/7340 [128:15<130:05, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:34,468 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:34:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:35,148 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3645/7340 [128:17<130:03, 28.4 steps/min]\u001b[92m17:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:36,838 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m17:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:37,514 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3645/7340 [128:19<130:04, 28.4 steps/min]\u001b[92m17:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:38,170 - agent.ComputerAgent - INFO - Computer: click({'x': 540, 'y': 12})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 540, 'y': 12})\n",
+ "2025-08-11 17:34:38,827 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:34:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:40,136 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3646/7340 [128:22<130:03, 28.4 steps/min]\u001b[92m17:34:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3648/7340 [128:23<129:56, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:42,752 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 294})\n",
+ "\u001b[92m17:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:43,416 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 163})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 163})\n",
+ "\u001b[92m17:34:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3648/7340 [128:25<129:58, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:44,078 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:44,744 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 223, 'y': 345}, {'x': 294, 'y': 373}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 223, 'y': 345}, {'x': 294, 'y': 373}]})\n",
+ " 50%|███████████████████---------------------| 3650/7340 [128:26<129:50, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3651/7340 [128:27<129:47, 28.4 steps/min]2025-08-11 17:34:46,619 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m17:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3651/7340 [128:29<129:49, 28.4 steps/min]\u001b[92m17:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:47,978 - agent.ComputerAgent - INFO - Computer: click({'x': 719, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 719, 'y': 294})\n",
+ "\u001b[92m17:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:48,651 - agent.ComputerAgent - INFO - Computer: click({'x': 882, 'y': 336})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 882, 'y': 336})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 50%|███████████████████---------------------| 3652/7340 [128:30<129:46, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:34:49,808 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:34:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3654/7340 [128:31<129:39, 28.4 steps/min]2025-08-11 17:34:50,518 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m17:34:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:51,794 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:34:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3654/7340 [128:34<129:41, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:53,834 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:55,133 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:34:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3655/7340 [128:38<129:41, 28.4 steps/min]\u001b[92m17:34:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:34:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:34:57,169 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:57,819 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:58,549 - agent.ComputerAgent - INFO - Computer: click({'x': 266, 'y': 149})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:34:59,189 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:34:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:34:59,833 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 657, 'scroll_x': 0, 'x': 86, 'y': 301})\n",
+ "\u001b[92m17:34:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "      50%|███████████████████---------------------| 3655/7340 [128:41<129:45, 28.4 steps/min]\n",
+ "\u001b[92m17:35:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:00,518 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 213})\n",
+ "2025-08-11 17:35:01,539 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:35:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:02,218 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:02,861 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3657/7340 [128:44<129:39, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:04,383 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:35:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3658/7340 [128:46<129:36, 28.4 steps/min]\u001b[92m17:35:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:05,056 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 351, 'y': 742}, {'x': 48, 'y': 741}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:05,718 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m17:35:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "      50%|███████████████████---------------------| 3658/7340 [128:47<129:38, 28.4 steps/min]\n",
+ " 50%|███████████████████---------------------| 3659/7340 [128:48<129:34, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3660/7340 [128:49<129:31, 28.4 steps/min]2025-08-11 17:35:08,404 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3660/7340 [128:50<129:32, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:09,095 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m17:35:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:10,175 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:35:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|███████████████████---------------------| 3660/7340 [128:52<129:34, 28.4 steps/min]2025-08-11 17:35:10,819 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:35:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "      50%|███████████████████---------------------| 3661/7340 [128:53<129:31, 28.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:12,659 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:13,727 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m17:35:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:14,418 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 121})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/reset \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3661/7340 [128:56<129:34, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/789598ee-3628-40d3-8b82-0c53827a32c1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3662/7340 [128:57<129:31, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:35:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:16,871 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 188})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:18,162 - agent.ComputerAgent - INFO - Agent: The extension from Desktop (helloExtension) has been loaded into Chrome via Developer mode and is now installed. Task completed\n",
+ "2025-08-11 17:35:18,829 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 352\n",
+ " - prompt_tokens: 11459\n",
+ " - total_tokens: 11811\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0178\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3663/7340 [129:01<129:31, 28.4 steps/min]\u001b[92m17:35:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:20,756 - agent.ComputerAgent - INFO - Computer: click({'x': 416, 'y': 320})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:35:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:35:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:22,705 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 50%|███████████████████---------------------| 3665/7340 [129:05<129:26, 28.4 steps/min]\u001b[92m17:35:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:24,029 - agent.ComputerAgent - INFO - Computer: double_click({'x': 221, 'y': 345})\n",
+ "\u001b[92m17:35:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:35:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:35:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:25,310 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:26,002 - agent.ComputerAgent - INFO - Computer: click({'x': 913, 'y': 335})\n",
+ "2025-08-11 17:35:26,675 - agent.ComputerAgent - INFO - Computer: click({'x': 557, 'y': 491})\n",
+ "\u001b[92m17:35:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3666/7340 [129:08<129:25, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:27,362 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 661, 'scroll_x': 0, 'x': 627, 'y': 705})\n",
+ "\u001b[92m17:35:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:27,998 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|███████████████████---------------------| 3669/7340 [129:10<129:14, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:35:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:35:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|████████████████████--------------------| 3671/7340 [129:11<129:07, 28.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:30,651 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 634, 'scroll_x': 0, 'x': 86, 'y': 245})\n",
+ "\u001b[92m17:35:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "      50%|████████████████████--------------------| 3671/7340 [129:13<129:08, 28.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:31,961 - agent.ComputerAgent - INFO - Computer: click({'x': 753, 'y': 173})\n",
+ "\u001b[92m17:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:32,581 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:35:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:33,302 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 310})\n",
+ "\u001b[92m17:35:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:35:33,894 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m17:35:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|████████████████████--------------------| 3672/7340 [129:15<129:07, 28.4 steps/min]2025-08-11 17:35:34,578 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 359, 'y': 742}, {'x': 115, 'y': 741}]})\n",
+ "2025-08-11 17:35:35,585 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:35:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3674/7340 [129:17<129:00, 28.4 steps/min]2025-08-11 17:35:36,230 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:35:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:36,911 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:35:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3676/7340 [129:18<128:53, 28.4 steps/min]2025-08-11 17:35:37,569 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:38,237 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:35:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3681/7340 [129:20<128:33, 28.5 steps/min]2025-08-11 17:35:38,886 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:35:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:39,880 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:35:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4b4d291-1fca-4038-8670-448014a55182/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3681/7340 [129:21<128:35, 28.5 steps/min]2025-08-11 17:35:40,540 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m17:35:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3681/7340 [129:23<128:36, 28.5 steps/min]2025-08-11 17:35:41,880 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:35:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:42,521 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:35:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3681/7340 [129:25<128:39, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:35:44,529 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:35:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:35:45,185 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:35:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|████████████████████--------------------| 3682/7340 [129:26<128:36, 28.4 steps/min]2025-08-11 17:35:45,888 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:35:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3682/7340 [129:27<128:37, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:47,569 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m17:35:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|████████████████████--------------------| 3682/7340 [129:29<128:38, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 50%|████████████████████--------------------| 3682/7340 [129:31<128:40, 28.4 steps/min]\u001b[92m17:35:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<128:37, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:51,601 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m17:35:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|████████████████████--------------------| 3683/7340 [129:33<128:38, 28.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|████████████████████--------------------| 3683/7340 [129:34<128:39, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:35:53,843 - agent.ComputerAgent - INFO - Agent: Do Not Disturb has been turned on via the top bar (date/time menu). You won’t receive notification pop-ups.\n",
+ "\n",
+ "To turn it off later, click the date/time in the top bar and toggle “Do Not Disturb” off.\n",
+ "\n",
+ "Task completed\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:agent.ComputerAgent:Agent: Do Not Disturb has been turned on via the top bar (date/time menu). You won’t receive notification pop-ups.\n",
+ "\n",
+ "To turn it off later, click the date/time in the top bar and toggle “Do Not Disturb” off.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 17:35:54,519 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 316\n",
+ " - prompt_tokens: 4009\n",
+ " - total_tokens: 4325\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0082\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 316\n",
+ " - prompt_tokens: 4009\n",
+ " - total_tokens: 4325\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0082\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 50%|████████████████████--------------------| 3685/7340 [129:37<128:33, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 17:35:56,175 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m17:35:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 50%|████████████████████--------------------| 3685/7340 [129:38<128:34, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 50%|████████████████████--------------------| 3686/7340 [129:39<128:32, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:59,123 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'backspace'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'backspace'})\n",
+ "\u001b[92m17:35:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/45b21d3b-9328-4819-bba2-f954432ba73e/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:35:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|████████████████████--------------------| 3702/7340 [129:40<127:26, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:35:59,750 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m17:35:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:36:00,419 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 202})\n",
+ "2025-08-11 17:36:01,765 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:36:01,765 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 960, 'y': 713})\n",
+ "\u001b[92m17:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|████████████████████--------------------| 3703/7340 [129:44<127:25, 28.5 steps/min]2025-08-11 17:36:03,058 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:36:03,059 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 576})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 576})\n",
+ "2025-08-11 17:36:03,713 - agent.ComputerAgent - INFO - Computer: click({'x': 885, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 885, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 50%|████████████████████--------------------| 3706/7340 [129:48<127:16, 28.6 steps/min]\u001b[92m17:36:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3709/7340 [129:49<127:05, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b28701c2-0fa4-4b07-bace-735fd2133893/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:36:08,691 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m17:36:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3709/7340 [129:52<127:08, 28.6 steps/min]\u001b[92m17:36:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:36:11,651 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:36:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:36:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:36:12,334 - agent.ComputerAgent - INFO - Computer: click({'x': 270, 'y': 329})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 270, 'y': 329})\n",
+ "\u001b[92m17:36:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3709/7340 [129:54<127:10, 28.6 steps/min]2025-08-11 17:36:13,001 - agent.ComputerAgent - INFO - Computer: click({'x': 105, 'y': 163})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 105, 'y': 163})\n",
+ "2025-08-11 17:36:13,935 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\u001b[92m17:36:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3711/7340 [129:56<127:04, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:36:15,541 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:36:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:36:16,230 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:36:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3712/7340 [129:58<127:01, 28.6 steps/min]2025-08-11 17:36:16,910 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:36:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3712/7340 [129:59<127:02, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:36:17,571 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m17:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "\u001b[92m17:36:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:36:18,632 - agent.ComputerAgent - INFO - Computer: click({'x': 552, 'y': 545})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 552, 'y': 545})\n",
+ " 51%|████████████████████--------------------| 3713/7340 [130:01<127:00, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/beafa529-961e-4382-b811-5d442e689644/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:02<126:57, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:36:21,620 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:36:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:36:22,267 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:36:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:04<126:59, 28.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:05<127:00, 28.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:36:24,432 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:36:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:06<127:01, 28.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:36:25,110 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:10<127:05, 28.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:36:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:47<127:41, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:48<127:42, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:49<127:43, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:50<127:44, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:52<127:46, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/448620aa-cee2-4394-81f2-d8efa1937c36/reset \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3714/7340 [130:54<127:48, 28.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:37:13,743 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:37:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3714/7340 [131:02<127:55, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:37:21,696 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:37:21,698 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ " 51%|████████████████████--------------------| 3715/7340 [131:04<127:53, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:37:23,902 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:37:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84c79e59-0828-4a11-a35b-c4f6d5d36ed1/close \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3715/7340 [131:05<127:55, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3715/7340 [131:06<127:56, 28.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3715/7340 [131:14<128:03, 28.3 steps/min]\u001b[92m17:37:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:37:34,432 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 91, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'scroll_x': 0, 'x': 91, 'y': 732})\n",
+ " 51%|████████████████████--------------------| 3715/7340 [131:16<128:05, 28.3 steps/min]\u001b[92m17:37:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:37:35,617 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 103})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 390, 'y': 103})\n",
+ " 51%|████████████████████--------------------| 3717/7340 [131:18<127:59, 28.3 steps/min]\u001b[92m17:37:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:37:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3717/7340 [131:19<128:00, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]\u001b[92m17:37:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3717/7340 [131:20<128:01, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it] 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3717/7340 [131:22<128:03, 28.3 steps/min]2025-08-11 17:37:42,047 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]\u001b[92m17:37:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it] 28.3 steps/min]\n",
+ "2025-08-11 17:37:42,773 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:37:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3717/7340 [131:25<128:06, 28.3 steps/min]\u001b[92m17:37:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:37:45,011 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 656, 'scroll_x': 0, 'x': 687, 'y': 320})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 656, 'scroll_x': 0, 'x': 687, 'y': 320})\n",
+ " 51%|████████████████████--------------------| 3717/7340 [131:27<128:07, 28.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:37:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:37:46,690 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 204})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 204})\n",
+ " 51%|████████████████████--------------------| 3719/7340 [131:32<128:04, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:37:51,444 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:37:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3719/7340 [131:33<128:05, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:37:52,612 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:37:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:37:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:37:53,263 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 757, 'y': 182}, {'x': 745, 'y': 254}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 757, 'y': 182}, {'x': 745, 'y': 254}]})\n",
+ " 51%|████████████████████--------------------| 3720/7340 [131:36<128:03, 28.3 steps/min]\u001b[92m17:37:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:37:55,448 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 944, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 944, 'y': 760})\n",
+ " 51%|████████████████████--------------------| 3721/7340 [131:39<128:02, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/reset \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3721/7340 [131:40<128:03, 28.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:37:59,674 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:37:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3721/7340 [131:41<128:04, 28.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:38:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:38:00,853 - agent.ComputerAgent - INFO - Computer: move({'x': 17, 'y': 17})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 17, 'y': 17})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:38:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3721/7340 [131:43<128:06, 28.2 steps/min]\u001b[92m17:38:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:02,277 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:38:02,278 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ "2025-08-11 17:38:02,918 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:38:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:38:04,383 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:38:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3722/7340 [131:46<128:05, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3723/7340 [131:48<128:02, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3723/7340 [131:50<128:04, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:10,513 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'g'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'g'})\n",
+ " 51%|████████████████████--------------------| 3723/7340 [131:52<128:06, 28.2 steps/min]\u001b[92m17:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:38:11,158 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 749, 'y': 440})\n",
+ "2025-08-11 17:38:11,836 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3724/7340 [131:54<128:04, 28.2 steps/min]\u001b[92m17:38:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:38:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:38:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:38:14,463 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:38:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3726/7340 [131:56<127:58, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:15,646 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3726/7340 [131:57<127:59, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3726/7340 [132:00<128:02, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:38:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:20,017 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 648, 'scroll_x': 0, 'x': 91, 'y': 710})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 648, 'scroll_x': 0, 'x': 91, 'y': 710})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3726/7340 [132:01<128:03, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3727/7340 [132:03<128:01, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:38:23,229 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:38:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3727/7340 [132:05<128:02, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:38:24,356 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:38:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3727/7340 [132:06<128:03, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:38:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:25,563 - agent.ComputerAgent - INFO - Computer: click({'x': 284, 'y': 354})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 284, 'y': 354})\n",
+ " 51%|████████████████████--------------------| 3728/7340 [132:08<128:01, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3729/7340 [132:09<127:58, 28.2 steps/min]2025-08-11 17:38:28,244 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:38:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fde8bca8-8a90-4fed-b46f-c24829445665/close \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3729/7340 [132:10<127:59, 28.2 steps/min]\u001b[92m17:38:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:29,579 - agent.ComputerAgent - INFO - Computer: click({'x': 904, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 904, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:38:30,927 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3729/7340 [132:12<128:01, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:38:32,054 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:38:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3730/7340 [132:13<127:58, 28.2 steps/min]2025-08-11 17:38:34,224 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:38:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3730/7340 [132:15<128:00, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:38:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:36,877 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 205, 'y': 340}, {'x': 232, 'y': 340}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 205, 'y': 340}, {'x': 232, 'y': 340}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3730/7340 [132:19<128:03, 28.2 steps/min]\u001b[92m17:38:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:38:38,174 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:38:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<128:00, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:38:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3731/7340 [132:24<128:04, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it] 28.2 steps/min]\n",
+ "2025-08-11 17:38:44,305 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:38:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:38:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:38:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3731/7340 [132:27<128:07, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:38:47,242 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 980, 'y': 60})\n",
+ " 51%|████████████████████--------------------| 3731/7340 [132:28<128:09, 28.2 steps/min]\u001b[92m17:38:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:38:47,905 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:38:49,895 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:38:55,703 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 93, 'y': 134}, {'x': 93, 'y': 136}]})\n",
+ "2025-08-11 17:38:58,745 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 17:39:03,350 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 662, 'scroll_x': 0, 'x': 91, 'y': 741})\n",
+ "2025-08-11 17:39:41,905 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -la\\n'})\n",
+ "[... repetitive LiteLLM, httpx, and agent.ComputerAgent log lines truncated ...]\n",
+ " 51%|████████████████████--------------------| 3753/7340 [133:23<127:29, 28.1 steps/min]\n",
+ "\u001b[92m17:39:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3756/7340 [133:25<127:19, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:39:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:39:46,615 - agent.ComputerAgent - INFO - Computer: type({'text': 'lisp'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'lisp'})\n",
+ " 51%|████████████████████--------------------| 3757/7340 [133:28<127:17, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:39:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:39:48,293 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 318})\n",
+ "\u001b[92m17:39:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:39:49,706 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/reset \"HTTP/1.1 502 Bad Gateway\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3758/7340 [133:32<127:16, 28.1 steps/min]\u001b[92m17:39:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:39:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:39:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:39:50,945 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 287})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 287})\n",
+ " 51%|████████████████████--------------------| 3760/7340 [133:33<127:09, 28.2 steps/min]\u001b[92m17:39:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:39:52,133 - agent.ComputerAgent - INFO - Computer: click({'x': 333, 'y': 548})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 333, 'y': 548})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/94463065-a78e-479a-b964-45ad23a48cbb/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3761/7340 [133:35<127:07, 28.2 steps/min]\u001b[92m17:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:39:54,068 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 594, 'scroll_x': 0, 'x': 509, 'y': 419})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 594, 'scroll_x': 0, 'x': 509, 'y': 419})\n",
+ " 51%|████████████████████--------------------| 3762/7340 [133:36<127:04, 28.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:39:55,211 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:39:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3763/7340 [133:37<127:00, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]2025-08-11 17:39:56,427 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:39:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]\u001b[92m17:39:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3763/7340 [133:39<127:02, 28.2 steps/min]2025-08-11 17:39:57,985 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:39:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:39:58,622 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:39:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards:  75%|███████▌  | 3/4 [00:05<00:01,  1.66s/it]2025-08-11 17:39:59,264 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:39:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "\u001b[92m17:39:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:40:01,361 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 51%|████████████████████--------------------| 3763/7340 [133:43<127:06, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:02,234 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:02,925 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:40:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:03,612 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:40:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:04,293 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 171, 'y': 744}, {'x': 48, 'y': 133}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 171, 'y': 744}, {'x': 48, 'y': 133}]})\n",
+ "\u001b[92m17:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3763/7340 [133:46<127:09, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:40:05,598 - agent.ComputerAgent - INFO - Computer: click({'x': 220, 'y': 204})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 220, 'y': 204})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:06,265 - agent.ComputerAgent - INFO - Computer: click({'x': 132, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 132, 'y': 182})\n",
+ "2025-08-11 17:40:06,953 - agent.ComputerAgent - INFO - Computer: click({'x': 623, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 623, 'y': 427})\n",
+ "\u001b[92m17:40:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:07,632 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:40:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3766/7340 [133:49<127:00, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:08,303 - agent.ComputerAgent - INFO - Computer: click({'x': 227, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 227, 'y': 181})\n",
+ "2025-08-11 17:40:08,957 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:40:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 51%|████████████████████--------------------| 3769/7340 [133:50<126:48, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0322563b-daf3-41ae-8a08-f5ecd9282bcc/close \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3770/7340 [133:51<126:45, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3770/7340 [133:52<126:46, 28.2 steps/min]\u001b[92m17:40:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:11,985 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 168})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 168})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3771/7340 [133:54<126:44, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:40:14,193 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:40:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3771/7340 [133:55<126:45, 28.2 steps/min]2025-08-11 17:40:14,852 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:40:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards:   0%|          | 0/4 [00:00, ?it/s]2025-08-11 17:40:16,546 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:40:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:17,352 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]\u001b[92m17:40:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3771/7340 [133:59<126:49, 28.1 steps/min]\u001b[92m17:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]\u001b[92m17:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3771/7340 [134:00<126:50, 28.1 steps/min]2025-08-11 17:40:19,659 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:40:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:20,547 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:40:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "\u001b[92m17:40:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3771/7340 [134:03<126:52, 28.1 steps/min]\u001b[92m17:40:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3771/7340 [134:05<126:54, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:26,481 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:27,846 - agent.ComputerAgent - INFO - Computer: type({'text': 'Total'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Total'})\n",
+ " 51%|████████████████████--------------------| 3771/7340 [134:09<126:58, 28.1 steps/min]2025-08-11 17:40:28,532 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 657, 'scroll_x': 0, 'x': 104, 'y': 329})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 657, 'scroll_x': 0, 'x': 104, 'y': 329})\n",
+ "2025-08-11 17:40:29,215 - agent.ComputerAgent - INFO - Computer: click({'x': 745, 'y': 281})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 745, 'y': 281})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:30,553 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 286})\n",
+ "\u001b[92m17:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:31,221 - agent.ComputerAgent - INFO - Computer: click({'x': 620, 'y': 570})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 620, 'y': 570})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:32,560 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+end'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+end'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 51%|████████████████████--------------------| 3772/7340 [134:14<126:59, 28.1 steps/min]\u001b[92m17:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:34,525 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:40:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:35,208 - agent.ComputerAgent - INFO - Computer: click({'x': 468, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 468, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 51%|████████████████████--------------------| 3776/7340 [134:17<126:45, 28.1 steps/min]\u001b[92m17:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:36,517 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 327, 'y': 339}, {'x': 249, 'y': 339}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 327, 'y': 339}, {'x': 249, 'y': 339}]})\n",
+ "2025-08-11 17:40:37,193 - agent.ComputerAgent - INFO - Computer: double_click({'x': 473, 'y': 110})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 473, 'y': 110})\n",
+ "\u001b[92m17:40:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:37,896 - agent.ComputerAgent - INFO - Computer: click({'x': 884, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 884, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/reset \"HTTP/1.1 200 OK\"\n",
+ " 51%|████████████████████--------------------| 3777/7340 [134:19<126:42, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:39,175 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 154})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 154})\n",
+ " 51%|████████████████████--------------------| 3780/7340 [134:20<126:31, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:40:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:40,356 - agent.ComputerAgent - INFO - Computer: double_click({'x': 327, 'y': 550})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 327, 'y': 550})\n",
+ " 52%|████████████████████--------------------| 3781/7340 [134:22<126:28, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 52%|████████████████████--------------------| 3782/7340 [134:23<126:25, 28.1 steps/min]\u001b[92m17:40:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:40:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:42,676 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 182, 'y': 742}, {'x': 114, 'y': 744}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 182, 'y': 742}, {'x': 114, 'y': 744}]})\n",
+ " 52%|████████████████████--------------------| 3783/7340 [134:25<126:23, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:40:44,313 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:40:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:40:45,348 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:46,351 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:40:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:47,017 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:40:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:47,656 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:40:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:29<125:33, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:48,996 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:40:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f62aade3-59d7-430e-9dc0-5349ac028a82/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:40:49,653 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:40:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:31<125:35, 28.2 steps/min]\u001b[92m17:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:40:51,355 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:33<125:37, 28.2 steps/min]2025-08-11 17:40:52,041 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:40:52,701 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/reset \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:34<125:38, 28.2 steps/min]2025-08-11 17:40:53,387 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:40:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:40:54,067 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:40:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:35<125:39, 28.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]\u001b[92m17:40:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:37<125:41, 28.2 steps/min]2025-08-11 17:40:57,535 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:39<125:42, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:40<125:43, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:40:59,768 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:41<125:45, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:42<125:45, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:44<125:47, 28.2 steps/min]\u001b[92m17:41:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:41:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:03,305 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 125, 'y': 182}, {'x': 322, 'y': 287}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 125, 'y': 182}, {'x': 322, 'y': 287}]})\n",
+ "\u001b[92m17:41:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:04,015 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 980, 'y': 60})\n",
+ " 52%|████████████████████--------------------| 3796/7340 [134:45<125:48, 28.2 steps/min]\u001b[92m17:41:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:04,655 - agent.ComputerAgent - INFO - Computer: double_click({'x': 475, 'y': 110})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 475, 'y': 110})\n",
+ "\u001b[92m17:41:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:05,333 - agent.ComputerAgent - INFO - Computer: click({'x': 771, 'y': 570})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 771, 'y': 570})\n",
+ "\u001b[92m17:41:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3798/7340 [134:47<125:41, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:05,973 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 318})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:41:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3800/7340 [134:48<125:34, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:07,823 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:41:07,824 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -667, 'x': 526, 'y': 355})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -667, 'x': 526, 'y': 355})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3802/7340 [134:50<125:28, 28.2 steps/min]\u001b[92m17:41:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:41:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:09,686 - agent.ComputerAgent - INFO - Computer: double_click({'x': 256, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 256, 'y': 339})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3802/7340 [134:52<125:30, 28.2 steps/min]\u001b[92m17:41:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:41:11,041 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m17:41:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:41:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:11,703 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:41:11,705 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3803/7340 [134:53<125:27, 28.2 steps/min]2025-08-11 17:41:12,378 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:41:13,044 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:41:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 52%|████████████████████--------------------| 3805/7340 [134:54<125:20, 28.2 steps/min]2025-08-11 17:41:13,727 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:41:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:41:14,409 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3805/7340 [134:56<125:21, 28.2 steps/min]2025-08-11 17:41:15,102 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:41:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3805/7340 [134:57<125:22, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:41:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:16,948 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m17:41:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3805/7340 [134:58<125:24, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:17,633 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:41:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:19,348 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3806/7340 [135:01<125:22, 28.2 steps/min]\u001b[92m17:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:21,372 - agent.ComputerAgent - INFO - Computer: double_click({'x': 503, 'y': 110})\n",
+ " 52%|████████████████████--------------------| 3807/7340 [135:03<125:19, 28.2 steps/min]2025-08-11 17:41:22,026 - agent.ComputerAgent - INFO - Computer: click({'x': 889, 'y': 338})\n",
+ "2025-08-11 17:41:22,698 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "\u001b[92m17:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3808/7340 [135:05<125:17, 28.2 steps/min]2025-08-11 17:41:24,046 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 654, 'scroll_x': 0, 'x': 107, 'y': 737})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/reset \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3809/7340 [135:06<125:14, 28.2 steps/min]2025-08-11 17:41:25,354 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 157})\n",
+ " 52%|████████████████████--------------------| 3811/7340 [135:07<125:07, 28.2 steps/min]2025-08-11 17:41:26,684 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ " 52%|████████████████████--------------------| 3812/7340 [135:08<125:04, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3812/7340 [135:11<125:07, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:32,482 - agent.ComputerAgent - INFO - Computer: click({'x': 859, 'y': 226})\n",
+ " 52%|████████████████████--------------------| 3812/7340 [135:14<125:10, 28.2 steps/min]\u001b[92m17:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:34,850 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(B2:B12)'})\n",
+ "2025-08-11 17:41:35,491 - agent.ComputerAgent - INFO - Computer: click({'x': 754, 'y': 179})\n",
+ "2025-08-11 17:41:36,161 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 284})\n",
+ " 52%|████████████████████--------------------| 3814/7340 [135:18<125:05, 28.2 steps/min]\u001b[92m17:41:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:38,868 - agent.ComputerAgent - INFO - Computer: click({'x': 690, 'y': 203})\n",
+ " 52%|████████████████████--------------------| 3817/7340 [135:20<124:55, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:39,515 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 131, 'y': 182}, {'x': 324, 'y': 229}]})\n",
+ "2025-08-11 17:41:40,188 - agent.ComputerAgent - INFO - Computer: double_click({'x': 398, 'y': 339})\n",
+ " 52%|████████████████████--------------------| 3818/7340 [135:21<124:52, 28.2 steps/min]2025-08-11 17:41:40,815 - agent.ComputerAgent - INFO - Computer: double_click({'x': 389, 'y': 88})\n",
+ " 52%|████████████████████--------------------| 3820/7340 [135:23<124:45, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3821/7340 [135:24<124:42, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:44,009 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 237})\n",
+ " 52%|████████████████████--------------------| 3821/7340 [135:25<124:43, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3822/7340 [135:27<124:40, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:41:46,418 - agent.ComputerAgent - INFO - Computer: click({'x': 952, 'y': 750})\n",
+ " 52%|████████████████████--------------------| 3823/7340 [135:28<124:38, 28.2 steps/min]2025-08-11 17:41:47,729 - agent.ComputerAgent - INFO - Computer: click({'x': 943, 'y': 616})\n",
+ " 52%|████████████████████--------------------| 3824/7340 [135:31<124:36, 28.2 steps/min]\u001b[92m17:41:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:41:51,762 - agent.ComputerAgent - INFO - Computer: double_click({'x': 300, 'y': 538})\n",
+ "2025-08-11 17:41:53,790 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:41:53,791 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 52%|████████████████████--------------------| 3826/7340 [135:35<124:32, 28.2 steps/min]2025-08-11 17:41:54,763 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "2025-08-11 17:41:56,124 - agent.ComputerAgent - INFO - Computer: double_click({'x': 219, 'y': 204})\n",
+ " 52%|████████████████████--------------------| 3827/7340 [135:38<124:30, 28.2 steps/min]\u001b[92m17:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3828/7340 [135:39<124:27, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:00,166 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 649, 'scroll_x': 0, 'x': 105, 'y': 329})\n",
+ " 52%|████████████████████--------------------| 3828/7340 [135:41<124:29, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3829/7340 [135:42<124:26, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3829/7340 [135:43<124:27, 28.2 steps/min]\u001b[92m17:42:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:03,142 - agent.ComputerAgent - INFO - Computer: click({'x': 133, 'y': 155})\n",
+ " 52%|████████████████████--------------------| 3830/7340 [135:44<124:24, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3831/7340 [135:46<124:22, 28.2 steps/min]2025-08-11 17:42:06,299 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ " 52%|████████████████████--------------------| 3831/7340 [135:48<124:23, 28.2 steps/min]\u001b[92m17:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:07,590 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:42:08,285 - agent.ComputerAgent - INFO - Computer: double_click({'x': 381, 'y': 103})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 381, 'y': 103})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3831/7340 [135:50<124:24, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:42:09,460 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3833/7340 [135:51<124:17, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:42:10,107 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m17:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 52%|████████████████████--------------------| 3833/7340 [135:53<124:19, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3834/7340 [135:54<124:16, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:14,152 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3834/7340 [135:56<124:18, 28.2 steps/min]2025-08-11 17:42:15,459 - agent.ComputerAgent - INFO - Computer: click({'x': 968, 'y': 752})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 968, 'y': 752})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:16,140 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:42:17,420 - agent.ComputerAgent - INFO - Computer: click({'x': 185, 'y': 241, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 185, 'y': 241, 'button': 'left'})\n",
+ "2025-08-11 17:42:18,068 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3834/7340 [136:00<124:22, 28.2 steps/min]2025-08-11 17:42:19,380 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 388})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 388})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:20,712 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "2025-08-11 17:42:21,391 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:42:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m17:42:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3837/7340 [136:04<124:13, 28.2 steps/min]\u001b[92m17:42:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:42:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:23,400 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:42:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:24,059 - agent.ComputerAgent - INFO - Computer: click({'x': 121, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 121, 'y': 203})\n",
+ "\u001b[92m17:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2a28af1e-e61d-489c-a18e-23c5071c9aff/close \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3840/7340 [136:05<124:02, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:25,077 - agent.ComputerAgent - INFO - Computer: click({'x': 883, 'y': 388})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 883, 'y': 388})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3841/7340 [136:07<124:00, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:27,080 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m17:42:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:42:27,772 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 398, 'y': 339}, {'x': 232, 'y': 339}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 398, 'y': 339}, {'x': 232, 'y': 339}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:42:29,441 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3842/7340 [136:11<123:59, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]2025-08-11 17:42:30,110 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:42:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:42:30,780 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:42:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it] 28.2 steps/min]2025-08-11 17:42:31,914 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:42:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3845/7340 [136:14<123:50, 28.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 17:42:33,780 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:42:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3845/7340 [136:16<123:51, 28.2 steps/min]\u001b[92m17:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:35,150 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:36,500 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 52%|████████████████████--------------------| 3845/7340 [136:18<123:53, 28.2 steps/min]\u001b[92m17:42:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:37,124 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:42:37,125 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 651})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 651})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:37,791 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 52%|████████████████████--------------------| 3846/7340 [136:19<123:50, 28.2 steps/min]2025-08-11 17:42:38,490 - agent.ComputerAgent - INFO - Computer: click({'x': 747, 'y': 135})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 747, 'y': 135})\n",
+ "2025-08-11 17:42:39,128 - agent.ComputerAgent - INFO - Computer: double_click({'x': 345, 'y': 86})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 345, 'y': 86})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:41,099 - agent.ComputerAgent - INFO - Computer: type({'text': '=VLOOKUP(E11,$D$2:$E$7,2,TRUE)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=VLOOKUP(E11,$D$2:$E$7,2,TRUE)'})\n",
+ " 52%|████████████████████--------------------| 3847/7340 [136:22<123:49, 28.2 steps/min]2025-08-11 17:42:41,771 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:42:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:42,464 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:42:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:42:43,101 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:42:43,747 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 52%|████████████████████--------------------| 3850/7340 [136:25<123:40, 28.2 steps/min]2025-08-11 17:42:44,752 - agent.ComputerAgent - INFO - Computer: click({'x': 956, 'y': 750})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 956, 'y': 750})\n",
+ " 52%|████████████████████--------------------| 3850/7340 [136:26<123:41, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3851/7340 [136:28<123:38, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:46,986 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m17:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:47,675 - agent.ComputerAgent - INFO - Computer: click({'x': 757, 'y': 437})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 757, 'y': 437})\n",
+ " 52%|████████████████████--------------------| 3851/7340 [136:29<123:39, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:42:49,994 - agent.ComputerAgent - INFO - Computer: click({'x': 150, 'y': 219, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 150, 'y': 219, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:42:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ " 52%|████████████████████--------------------| 3853/7340 [136:32<123:34, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:51,336 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m17:42:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:52,606 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3854/7340 [136:34<123:31, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:42:53,653 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 556})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 556})\n",
+ "2025-08-11 17:42:54,317 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 53%|█████████████████████-------------------| 3854/7340 [136:36<123:33, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:42:54,976 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:42:55,636 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ "2025-08-11 17:42:56,289 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:42:57,597 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 53%|█████████████████████-------------------| 3856/7340 [136:39<123:28, 28.2 steps/min]\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:43:00,934 - agent.ComputerAgent - INFO - Computer: click({'x': 919, 'y': 335})\n",
+ "2025-08-11 17:43:02,914 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:43:07,007 - agent.ComputerAgent - INFO - Computer: double_click({'x': 771, 'y': 182})\n",
+ "2025-08-11 17:43:10,289 - agent.ComputerAgent - INFO - Computer: move({'x': 139, 'y': 335})\n",
+ "2025-08-11 17:43:13,462 - agent.ComputerAgent - INFO - Computer: double_click({'x': 354, 'y': 339})\n",
+ "2025-08-11 17:43:14,132 - agent.ComputerAgent - INFO - Computer: click({'x': 956, 'y': 752})\n",
+ "2025-08-11 17:43:15,494 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 35})\n",
+ "2025-08-11 17:43:17,491 - agent.ComputerAgent - INFO - Computer: double_click({'x': 414, 'y': 86})\n",
+ "2025-08-11 17:43:18,807 - agent.ComputerAgent - INFO - Computer: click({'x': 408, 'y': 150})\n",
+ "2025-08-11 17:43:19,494 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 524})\n",
+ "2025-08-11 17:43:20,135 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 469, 'y': 742}, {'x': 203, 'y': 744}]})\n",
+ " 53%|█████████████████████-------------------| 3870/7340 [137:02<122:52, 28.2 steps/min]\n",
+ "2025-08-11 17:43:25,238 - agent.ComputerAgent - INFO - Computer: click({'x': 109, 'y': 684})\n",
+ "2025-08-11 17:43:31,190 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+r'})\n",
+ "2025-08-11 17:43:34,557 - agent.ComputerAgent - INFO - Computer: click({'x': 226, 'y': 182})\n",
+ "2025-08-11 17:43:35,187 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 75})\n",
+ "2025-08-11 17:43:38,991 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 141})\n",
+ "2025-08-11 17:43:41,716 - agent.ComputerAgent - INFO - Computer: click({'x': 53, 'y': 133})\n",
+ "2025-08-11 17:43:43,025 - agent.ComputerAgent - INFO - Computer: click({'x': 331, 'y': 549})\n",
+ "2025-08-11 17:43:46,729 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 53%|█████████████████████-------------------| 3879/7340 [137:31<122:41, 28.2 steps/min]\n",
+ "2025-08-11 17:43:49,962 - agent.ComputerAgent - INFO - Computer: click({'x': 799, 'y': 614})\n",
+ "2025-08-11 17:43:50,611 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:43:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:43:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:43:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:43:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:43:51,936 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 37})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 982, 'y': 37})\n",
+ "\u001b[92m17:43:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:43:53,279 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 53%|█████████████████████-------------------| 3879/7340 [137:35<122:45, 28.2 steps/min]\u001b[92m17:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:43:53,920 - agent.ComputerAgent - INFO - Computer: double_click({'x': 582, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 582, 'y': 105})\n",
+ "2025-08-11 17:43:54,569 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:43:55,895 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ " 53%|█████████████████████-------------------| 3881/7340 [137:37<122:39, 28.2 steps/min]\u001b[92m17:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:43:56,588 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 125, 'y': 321})\n",
+ "\u001b[92m17:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:43:57,610 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:43:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:43:58,263 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 227, 'y': 339}, {'x': 266, 'y': 340}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 227, 'y': 339}, {'x': 266, 'y': 340}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:43:59,584 - agent.ComputerAgent - INFO - Computer: type({'text': 'gimp ~/Desktop/cola.png &\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'gimp ~/Desktop/cola.png &\\n'})\n",
+ "2025-08-11 17:44:00,268 - agent.ComputerAgent - INFO - Computer: click({'x': 771, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 771, 'y': 178})\n",
+ " 53%|█████████████████████-------------------| 3882/7340 [137:42<122:39, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3886/7340 [137:43<122:24, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0e559cf1-b9b2-47a8-a680-559ea84d6048/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:02,783 - agent.ComputerAgent - INFO - Computer: click({'x': 500, 'y': 347})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 500, 'y': 347})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3886/7340 [137:45<122:26, 28.2 steps/min]\u001b[92m17:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3887/7340 [137:46<122:23, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:44:05,620 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:44:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3887/7340 [137:47<122:24, 28.2 steps/min]2025-08-11 17:44:06,258 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:44:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 17:44:07,079 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:44:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:44:07,740 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:44:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3887/7340 [137:49<122:26, 28.2 steps/min]2025-08-11 17:44:08,525 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:44:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "2025-08-11 17:44:09,851 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:44:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 53%|█████████████████████-------------------| 3887/7340 [137:51<122:28, 28.2 steps/min]2025-08-11 17:44:10,538 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:44:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3887/7340 [137:52<122:29, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:11,845 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:12,511 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 154})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 83, 'y': 154})\n",
+ "\u001b[92m17:44:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3887/7340 [137:54<122:30, 28.2 steps/min]\u001b[92m17:44:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:13,844 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 286})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3888/7340 [137:56<122:28, 28.2 steps/min]\u001b[92m17:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:15,150 - agent.ComputerAgent - INFO - Computer: click({'x': 731, 'y': 161})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 731, 'y': 161})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:44:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:16,489 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 194})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 194})\n",
+ " 53%|█████████████████████-------------------| 3889/7340 [137:58<122:25, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:17,646 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 111})\n",
+ " 53%|█████████████████████-------------------| 3892/7340 [138:00<122:15, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3892/7340 [138:02<122:17, 28.2 steps/min]\u001b[92m17:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:21,479 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3892/7340 [138:03<122:18, 28.2 steps/min]2025-08-11 17:44:22,129 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:24,063 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:25,366 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3893/7340 [138:07<122:17, 28.2 steps/min]\u001b[92m17:44:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:44:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:26,655 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 416})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 416})\n",
+ "2025-08-11 17:44:27,288 - agent.ComputerAgent - INFO - Computer: double_click({'x': 499, 'y': 347})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 499, 'y': 347})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:28,609 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:44:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3894/7340 [138:10<122:17, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:29,870 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:44:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:31,177 - agent.ComputerAgent - INFO - Computer: type({'text': 'B2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'B2'})\n",
+ "2025-08-11 17:44:31,860 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 40})\n",
+ "\u001b[92m17:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3896/7340 [138:13<122:11, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:32,496 - agent.ComputerAgent - INFO - Computer: double_click({'x': 430, 'y': 342})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 430, 'y': 342})\n",
+ "\u001b[92m17:44:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:33,168 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 128, 'y': 320}, {'x': 342, 'y': 320}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 128, 'y': 320}, {'x': 342, 'y': 320}]})\n",
+ " 53%|█████████████████████-------------------| 3898/7340 [138:14<122:04, 28.2 steps/min]2025-08-11 17:44:34,278 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:44:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:44:35,650 - agent.ComputerAgent - INFO - Computer: type({'text': '=LOOKUP(E11;{0;30;60;80;90;100};{\"Fail\";\"Pass\";\"Average\";\"Above average\";\"Excellent\";\"Exceptional!\"})'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=LOOKUP(E11;{0;30;60;80;90;100};{\"Fail\";\"Pass\";\"Average\";\"Above average\";\"Excellent\";\"Exceptional!\"})'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3900/7340 [138:18<121:59, 28.2 steps/min]\u001b[92m17:44:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:37,442 - agent.ComputerAgent - INFO - Computer: click({'x': 756, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 756, 'y': 178})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3901/7340 [138:19<121:56, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:44:38,619 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:44:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3902/7340 [138:21<121:53, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:39,940 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:44:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:44:40,634 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 77})\n",
+ "2025-08-11 17:44:41,646 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:44:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:44:42,320 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:44:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 53%|█████████████████████-------------------| 3902/7340 [138:24<121:56, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:44:45,805 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:44:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:44:46,502 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:44:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:44:47,131 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3906/7340 [138:29<121:45, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:48,441 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:44:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:44:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:50,562 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 388})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:44:51,872 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 53%|█████████████████████-------------------| 3906/7340 [138:33<121:48, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a5f69ad6-9361-4670-b101-61761113341c/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:44:53,223 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:44:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:44:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3908/7340 [138:36<121:43, 28.2 steps/min]\u001b[92m17:44:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:44:56,237 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 17:44:56,896 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 157})\n",
+ "\u001b[92m17:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/03201a42-df17-4896-9367-120fd49d3bb7/reset \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3908/7340 [138:39<121:45, 28.2 steps/min]2025-08-11 17:44:58,212 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:44:59,553 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it] 28.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3910/7340 [138:44<121:42, 28.2 steps/min]\u001b[92m17:45:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:45:03,309 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:45:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it] 28.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3910/7340 [138:46<121:44, 28.2 steps/min]2025-08-11 17:45:05,257 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:45:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:05,915 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:45:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:45:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3910/7340 [138:47<121:45, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:06,608 - agent.ComputerAgent - INFO - Computer: click({'x': 773, 'y': 189})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3910/7340 [138:49<121:47, 28.2 steps/min]\u001b[92m17:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:08,586 - agent.ComputerAgent - INFO - Computer: double_click({'x': 694, 'y': 95})\n",
+ "2025-08-11 17:45:09,281 - agent.ComputerAgent - INFO - Computer: move({'x': 342, 'y': 322})\n",
+ " 53%|█████████████████████-------------------| 3911/7340 [138:51<121:44, 28.2 steps/min]\u001b[92m17:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:09,917 - agent.ComputerAgent - INFO - Computer: click({'x': 234, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:11,872 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 659})\n",
+ "2025-08-11 17:45:12,525 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:45:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 53%|█████████████████████-------------------| 3913/7340 [138:54<121:39, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:14,185 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:45:15,461 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m17:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3915/7340 [138:57<121:33, 28.2 steps/min]\n",
+ "\u001b[92m17:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:16,142 - agent.ComputerAgent - INFO - Computer: click({'x': 144, 'y': 152})\n",
+ "2025-08-11 17:45:16,804 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:45:16,805 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/050a0934-63e8-46a0-8868-de32b28174ef/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:45:18,134 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ " 53%|█████████████████████-------------------| 3916/7340 [138:59<121:32, 28.2 steps/min]\u001b[92m17:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:19,458 - agent.ComputerAgent - INFO - Computer: double_click({'x': 306, 'y': 341})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<121:25, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3919/7340 [139:03<121:22, 28.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it] 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:45:24,215 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:45:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]2025-08-11 17:45:25,615 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "\u001b[92m17:45:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3919/7340 [139:07<121:26, 28.2 steps/min]2025-08-11 17:45:26,767 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:45:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 53%|█████████████████████-------------------| 3919/7340 [139:08<121:27, 28.2 steps/min]2025-08-11 17:45:27,657 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:45:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:45:28,303 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:45:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3919/7340 [139:10<121:29, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:30,737 - agent.ComputerAgent - INFO - Computer: click({'x': 304, 'y': 221, 'button': 'left'})\n",
+ "\u001b[92m17:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/479a3737-3ad4-48da-b73f-c8ea6e38d096/close \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3919/7340 [139:12<121:31, 28.2 steps/min]\n",
+ "2025-08-11 17:45:31,381 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:45:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:45:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:32,050 - agent.ComputerAgent - INFO - Computer: double_click({'x': 422, 'y': 408})\n",
+ "\u001b[92m17:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:32,703 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:45:33,385 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:45:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:45:35,069 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/invoke \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3920/7340 [139:16<121:30, 28.1 steps/min]\u001b[92m17:45:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:36,135 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 764, 'y': 182}, {'x': 748, 'y': 254}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<121:24, 28.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3bde0e0-c60f-4177-b7dd-15e361558126/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it] 28.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:45:40,539 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]\u001b[92m17:45:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3923/7340 [139:23<121:24, 28.1 steps/min]\u001b[92m17:45:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:45:43,106 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:45:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3923/7340 [139:24<121:25, 28.1 steps/min]2025-08-11 17:45:44,115 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 53%|█████████████████████-------------------| 3923/7340 [139:25<121:26, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.81s/it] 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3923/7340 [139:29<121:29, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.48s/it] 28.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3bde46c9-685b-4102-9ef4-a1535d5fcc85/close \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3924/7340 [139:31<121:27, 28.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 53%|█████████████████████-------------------| 3924/7340 [139:33<121:29, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3924/7340 [139:34<121:30, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<121:31, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:45:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3924/7340 [139:37<121:32, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]\u001b[92m17:45:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 53%|█████████████████████-------------------| 3924/7340 [139:38<121:34, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/73646789-482d-4b1c-8ec1-5a943d563fab/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it] 28.1 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3934/7340 [139:40<120:56, 28.2 steps/min]\u001b[92m17:45:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:45:59,893 - agent.ComputerAgent - INFO - Computer: click({'x': 602, 'y': 194})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 602, 'y': 194})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/789598ee-3628-40d3-8b82-0c53827a32c1/close \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3934/7340 [139:41<120:56, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 54%|█████████████████████-------------------| 3935/7340 [139:42<120:53, 28.2 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:02,250 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 969, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/127b9298-d3cc-4b90-8567-e45146efa729/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3935/7340 [139:43<120:54, 28.2 steps/min]2025-08-11 17:46:03,566 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:46:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3936/7340 [139:45<120:51, 28.2 steps/min]\u001b[92m17:46:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:04,263 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 239})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 86, 'y': 239})\n",
+ " 54%|█████████████████████-------------------| 3936/7340 [139:46<120:52, 28.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 54%|█████████████████████-------------------| 3937/7340 [139:47<120:49, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:06,435 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 318})\n",
+ "2025-08-11 17:46:07,080 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:46:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3937/7340 [139:48<120:50, 28.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:08,264 - agent.ComputerAgent - INFO - Computer: click({'x': 799, 'y': 613})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 799, 'y': 613})\n",
+ " 54%|█████████████████████-------------------| 3938/7340 [139:49<120:48, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:09,937 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:46:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3939/7340 [139:52<120:46, 28.2 steps/min]\u001b[92m17:46:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:46:12,797 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 387})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 553, 'y': 387})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.74s/it]\u001b[92m17:46:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.66s/it]2025-08-11 17:46:15,205 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+z'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+z'})\n",
+ " 54%|█████████████████████-------------------| 3939/7340 [139:56<120:50, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:15,880 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:46:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]2025-08-11 17:46:16,768 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ " 54%|█████████████████████-------------------| 3940/7340 [139:59<120:48, 28.1 steps/min]\u001b[92m17:46:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:18,321 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:19,357 - agent.ComputerAgent - INFO - Computer: double_click({'x': 420, 'y': 415})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 420, 'y': 415})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:20,031 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:46:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3940/7340 [140:01<120:50, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:20,716 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:46:20,716 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 636})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 636})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:46:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:22,038 - agent.ComputerAgent - INFO - Computer: click({'x': 82, 'y': 331})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 82, 'y': 331})\n",
+ " 54%|█████████████████████-------------------| 3941/7340 [140:03<120:48, 28.1 steps/min]\u001b[92m17:46:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:22,724 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 104})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 534, 'y': 104})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:23,390 - agent.ComputerAgent - INFO - Computer: click({'x': 414, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 414, 'y': 75})\n",
+ " 54%|█████████████████████-------------------| 3945/7340 [140:07<120:35, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:26,094 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:46:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/reset \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3945/7340 [140:08<120:35, 28.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3945/7340 [140:09<120:37, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:28,406 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:29,100 - agent.ComputerAgent - INFO - Computer: click({'x': 345, 'y': 66})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 345, 'y': 66})\n",
+ "2025-08-11 17:46:29,756 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:46:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3945/7340 [140:11<120:38, 28.1 steps/min]2025-08-11 17:46:30,435 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:46:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3946/7340 [140:13<120:36, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:32,454 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:46:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:33,104 - agent.ComputerAgent - INFO - Computer: click({'x': 267, 'y': 416})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 267, 'y': 416})\n",
+ "2025-08-11 17:46:33,755 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:46:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3946/7340 [140:15<120:38, 28.1 steps/min]\u001b[92m17:46:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:46:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:34,926 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 130, 'y': 322}, {'x': 352, 'y': 318}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 130, 'y': 322}, {'x': 352, 'y': 318}]})\n",
+ " 54%|█████████████████████-------------------| 3947/7340 [140:16<120:35, 28.1 steps/min]2025-08-11 17:46:35,611 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:46:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:46:36,284 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:46:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3948/7340 [140:18<120:32, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3948/7340 [140:19<120:33, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:46:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:38,387 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:46:38,388 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 13})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 13})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:39,072 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:46:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:41,058 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+z'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+z'})\n",
+ " 54%|█████████████████████-------------------| 3948/7340 [140:22<120:36, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:42,097 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:42,767 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ " 54%|█████████████████████-------------------| 3949/7340 [140:24<120:34, 28.1 steps/min]\u001b[92m17:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:46:43,463 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -1401, 'x': 526, 'y': 434})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -1401, 'x': 526, 'y': 434})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:44,765 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:46:44,765 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:46,098 - agent.ComputerAgent - INFO - Computer: type({'text': 'Zoom Chrome Extension'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Zoom Chrome Extension'})\n",
+ " 54%|█████████████████████-------------------| 3950/7340 [140:27<120:32, 28.1 steps/min]2025-08-11 17:46:46,777 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:46:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3952/7340 [140:28<120:25, 28.1 steps/min]2025-08-11 17:46:47,959 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:46:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3952/7340 [140:29<120:26, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:46:48,648 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m17:46:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:46:49,301 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:46:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3952/7340 [140:31<120:27, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3952/7340 [140:32<120:28, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ab746d73-0661-41f7-b989-ce2eb2890384/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 54%|█████████████████████-------------------| 3953/7340 [140:33<120:25, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:52,532 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ "\u001b[92m17:46:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3953/7340 [140:34<120:26, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:46:53,215 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:46:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3953/7340 [140:35<120:27, 28.1 steps/min]2025-08-11 17:46:54,401 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:46:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 54%|█████████████████████-------------------| 3954/7340 [140:36<120:24, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3954/7340 [140:37<120:25, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:56,279 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m17:46:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:46:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:46:58,201 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+n'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+n'})\n",
+ " 54%|█████████████████████-------------------| 3954/7340 [140:39<120:27, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:46:58,831 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:01,518 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:47:02,208 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ " 54%|█████████████████████-------------------| 3955/7340 [140:43<120:27, 28.1 steps/min]\u001b[92m17:47:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:02,881 - agent.ComputerAgent - INFO - Computer: click({'x': 267, 'y': 420})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 267, 'y': 420})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:47:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:05,286 - agent.ComputerAgent - INFO - Computer: type({'text': '=IF(E11<30,\"Fail\",IF(E11<60,\"Pass\",IF(E11<80,\"Average\",IF(E11<90,\"Above average\",IF(E11<100,\"Excellent\",\"Exceptional!\")))))'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=IF(E11<30,\"Fail\",IF(E11<60,\"Pass\",IF(E11<80,\"Average\",IF(E11<90,\"Above average\",IF(E11<100,\"Excellent\",\"Exceptional!\")))))'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:06,607 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:47:06,608 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 54%|█████████████████████-------------------| 3957/7340 [140:48<120:22, 28.1 steps/min]2025-08-11 17:47:07,318 - agent.ComputerAgent - INFO - Computer: click({'x': 753, 'y': 268})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 753, 'y': 268})\n",
+ "2025-08-11 17:47:07,971 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:09,979 - agent.ComputerAgent - INFO - Computer: click({'x': 527, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 527, 'y': 91})\n",
+ " 54%|█████████████████████-------------------| 3960/7340 [140:51<120:13, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:11,276 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3963/7340 [140:53<120:03, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:12,574 - agent.ComputerAgent - INFO - Computer: click({'x': 129, 'y': 318})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 129, 'y': 318})\n",
+ "\u001b[92m17:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:13,230 - agent.ComputerAgent - INFO - Computer: click({'x': 144, 'y': 151})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 144, 'y': 151})\n",
+ "\u001b[92m17:47:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3964/7340 [140:54<120:00, 28.1 steps/min]2025-08-11 17:47:13,877 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 322, 'y': 219}, {'x': 543, 'y': 261}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 322, 'y': 219}, {'x': 543, 'y': 261}]})\n",
+ " 54%|█████████████████████-------------------| 3966/7340 [140:55<119:53, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a1613365-876e-432c-9025-bb7d464c9014/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:15,018 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m17:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3967/7340 [140:56<119:50, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 54%|█████████████████████-------------------| 3968/7340 [140:57<119:47, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3968/7340 [140:58<119:48, 28.1 steps/min]2025-08-11 17:47:17,696 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:47:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:18,333 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m17:47:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3968/7340 [141:00<119:49, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:18,982 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:47:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:20,403 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:47:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3968/7340 [141:02<119:51, 28.1 steps/min]2025-08-11 17:47:21,040 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:47:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 17:47:22,485 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:47:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:23,177 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:47:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:23,858 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:47:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:24,550 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:26,310 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+n'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3969/7340 [141:08<119:52, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:27,631 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:47:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:28,310 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:47:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:47:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3969/7340 [141:10<119:53, 28.1 steps/min]2025-08-11 17:47:28,969 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:47:28,969 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 918, 'y': 64})\n",
+ "2025-08-11 17:47:29,597 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:47:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:30,641 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3969/7340 [141:12<119:55, 28.1 steps/min]2025-08-11 17:47:31,305 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:31,941 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3970/7340 [141:13<119:53, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:32,583 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m17:47:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:33,228 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:47:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:34,525 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:35,900 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 54%|█████████████████████-------------------| 3970/7340 [141:17<119:56, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 54%|█████████████████████-------------------| 3973/7340 [141:18<119:45, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:38,316 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m17:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/749cb05b-d08c-4e9f-929b-3504313826a5/reset \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3973/7340 [141:21<119:47, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:40,670 - agent.ComputerAgent - INFO - Computer: click({'x': 943, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3973/7340 [141:22<119:48, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:42,351 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:47:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:47:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 54%|█████████████████████-------------------| 3975/7340 [141:24<119:42, 28.1 steps/min]\u001b[92m17:47:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:43,013 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:47:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:43,676 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:47:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:47:44,336 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:47:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 54%|█████████████████████-------------------| 3975/7340 [141:26<119:44, 28.1 steps/min]\u001b[92m17:47:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:45,645 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 136, 'y': 322}, {'x': 352, 'y': 318}]})\n",
+ "\u001b[92m17:47:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:46,917 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 15, 'y': 477})\n",
+ " 54%|█████████████████████-------------------| 3975/7340 [141:28<119:45, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:47:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:47,589 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:47:47,590 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 12, 'y': 524})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3977/7340 [141:30<119:39, 28.1 steps/min]\u001b[92m17:47:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:49,533 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3978/7340 [141:31<119:36, 28.1 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:51,248 - agent.ComputerAgent - INFO - Computer: click({'x': 859, 'y': 166})\n",
+ "2025-08-11 17:47:51,880 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 595})\n",
+ "\u001b[92m17:47:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:47:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3978/7340 [141:34<119:39, 28.1 steps/min]\n",
+ "2025-08-11 17:47:53,879 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 616})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:47:54,557 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:47:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:47:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:55,257 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:47:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3981/7340 [141:37<119:29, 28.1 steps/min]2025-08-11 17:47:55,915 - agent.ComputerAgent - INFO - Computer: click({'x': 753, 'y': 266})\n",
+ "\u001b[92m17:47:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:47:56,979 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:47:56,979 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 32})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ " 54%|█████████████████████-------------------| 3984/7340 [141:39<119:19, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:47:58,625 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m17:47:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:47:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3984/7340 [141:41<119:21, 28.1 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:48:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:00,684 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 62})\n",
+ "\u001b[92m17:48:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3984/7340 [141:42<119:22, 28.1 steps/min]2025-08-11 17:48:01,340 - agent.ComputerAgent - INFO - Computer: click({'x': 394, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3986/7340 [141:43<119:15, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:02,665 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:48:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:48:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:03,341 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 510})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3987/7340 [141:46<119:13, 28.1 steps/min]\u001b[92m17:48:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:05,034 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:48:05,696 - agent.ComputerAgent - INFO - Computer: click({'x': 835, 'y': 501})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3988/7340 [141:47<119:10, 28.1 steps/min]2025-08-11 17:48:06,345 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:06,984 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m17:48:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3989/7340 [141:48<119:07, 28.1 steps/min]2025-08-11 17:48:07,653 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:48:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:48:08,298 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3989/7340 [141:50<119:08, 28.1 steps/min]2025-08-11 17:48:08,965 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:10,388 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:48:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 54%|█████████████████████-------------------| 3990/7340 [141:53<119:07, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:12,325 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:48:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:48:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:13,361 - agent.ComputerAgent - INFO - Computer: click({'x': 861, 'y': 197})\n",
+ "\u001b[92m17:48:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3990/7340 [141:55<119:09, 28.1 steps/min]\n",
+ "2025-08-11 17:48:14,038 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 593, 'scroll_x': 0, 'x': 530, 'y': 310})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:14,715 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:48:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3991/7340 [141:56<119:06, 28.1 steps/min]2025-08-11 17:48:15,377 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:48:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:16,026 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m17:48:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3992/7340 [141:57<119:03, 28.1 steps/min]2025-08-11 17:48:16,704 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:48:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 54%|█████████████████████-------------------| 3992/7340 [141:58<119:04, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:19,424 - agent.ComputerAgent - INFO - Computer: type({'text': 'code ~/Desktop/project\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:48:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:22,060 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:23,343 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3993/7340 [142:05<119:06, 28.1 steps/min]\u001b[92m17:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:25,044 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:48:25,732 - agent.ComputerAgent - INFO - Computer: click({'x': 404, 'y': 593})\n",
+ "2025-08-11 17:48:26,406 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 427})\n",
+ "\u001b[92m17:48:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 54%|█████████████████████-------------------| 3995/7340 [142:08<119:01, 28.1 steps/min]\u001b[92m17:48:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:48:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:48:27,696 - agent.ComputerAgent - INFO - Computer: click({'x': 178, 'y': 318})\n",
+ "2025-08-11 17:48:28,339 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:29,701 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 629})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:48:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3997/7340 [142:12<118:56, 28.1 steps/min]\n",
+ "\u001b[92m17:48:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:48:31,058 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:48:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:32,391 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://chromewebstore.google.com/'})\n",
+ "2025-08-11 17:48:33,031 - agent.ComputerAgent - INFO - Computer: click({'x': 799, 'y': 616})\n",
+ "2025-08-11 17:48:33,704 - agent.ComputerAgent - INFO - Computer: click({'x': 713, 'y': 268})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 54%|█████████████████████-------------------| 3999/7340 [142:16<118:51, 28.1 steps/min]\u001b[92m17:48:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:48:35,018 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 35})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 55%|█████████████████████-------------------| 4002/7340 [142:17<118:40, 28.1 steps/min]\u001b[92m17:48:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:48:36,245 - agent.ComputerAgent - INFO - Computer: click({'x': 564, 'y': 498})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:36,962 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m17:48:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4003/7340 [142:19<118:38, 28.1 steps/min]\u001b[92m17:48:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:48:38,987 - agent.ComputerAgent - INFO - Computer: click({'x': 146, 'y': 151})\n",
+ " 55%|█████████████████████-------------------| 4004/7340 [142:20<118:35, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4005/7340 [142:21<118:32, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:42,453 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:48:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4006/7340 [142:24<118:30, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:43,100 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:48:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:44,180 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:48:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4006/7340 [142:25<118:32, 28.1 steps/min]2025-08-11 17:48:44,808 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:48:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:48:45,490 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:48:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:48:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4006/7340 [142:27<118:33, 28.1 steps/min]2025-08-11 17:48:46,809 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:48:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:48:47,463 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:48:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:48:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4006/7340 [142:29<118:35, 28.1 steps/min]2025-08-11 17:48:48,502 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 212})\n",
+ "2025-08-11 17:48:49,193 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:48:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4006/7340 [142:30<118:36, 28.1 steps/min]2025-08-11 17:48:50,150 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m17:48:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:48:51,221 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:48:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|█████████████████████-------------------| 4007/7340 [142:32<118:34, 28.1 steps/min]2025-08-11 17:48:51,885 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:48:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:48:52,570 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|█████████████████████-------------------| 4007/7340 [142:34<118:35, 28.1 steps/min]2025-08-11 17:48:53,230 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:48:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|█████████████████████-------------------| 4007/7340 [142:37<118:37, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 55%|█████████████████████-------------------| 4008/7340 [142:38<118:34, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:57,932 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m17:48:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|█████████████████████-------------------| 4008/7340 [142:39<118:35, 28.1 steps/min]2025-08-11 17:48:58,622 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:48:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:48:59,913 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(C2:C12)'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m17:49:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4009/7340 [142:42<118:34, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:02,379 - agent.ComputerAgent - INFO - Computer: type({'text': '-30'})\n",
+ "\u001b[92m17:49:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4010/7340 [142:44<118:32, 28.1 steps/min]\n",
+ "2025-08-11 17:49:03,678 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m17:49:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:04,337 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 308})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:05,691 - agent.ComputerAgent - INFO - Computer: click({'x': 129, 'y': 284, 'button': 'left'})\n",
+ " 55%|█████████████████████-------------------| 4011/7340 [142:47<118:30, 28.1 steps/min]\u001b[92m17:49:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:06,990 - agent.ComputerAgent - INFO - Computer: click({'x': 392, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:49:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4014/7340 [142:50<118:21, 28.1 steps/min]\u001b[92m17:49:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:10,283 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+b'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:10,964 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:11,691 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 17, 'y': 237})\n",
+ "\u001b[92m17:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4015/7340 [142:54<118:20, 28.1 steps/min]\u001b[92m17:49:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:14,162 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:14,848 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 595})\n",
+ "2025-08-11 17:49:15,561 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 148})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:16,210 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:49:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:16,882 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 243})\n",
+ " 55%|█████████████████████-------------------| 4016/7340 [142:58<118:20, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:49:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:17,626 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 37})\n",
+ "\u001b[92m17:49:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:18,352 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 150})\n",
+ " 55%|█████████████████████-------------------| 4021/7340 [143:01<118:02, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:19,995 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m17:49:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|█████████████████████-------------------| 4021/7340 [143:03<118:04, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:22,833 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:49:22,835 - agent.ComputerAgent - INFO - Computer: click({'x': 223, 'y': 179})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 55%|█████████████████████-------------------| 4022/7340 [143:04<118:01, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:23,511 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:24,191 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4023/7340 [143:05<117:59, 28.1 steps/min]2025-08-11 17:49:24,852 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:25,542 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:49:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4023/7340 [143:07<118:00, 28.1 steps/min]\u001b[92m17:49:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:28,409 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:31,028 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 55%|█████████████████████-------------------| 4023/7340 [143:12<118:04, 28.1 steps/min]2025-08-11 17:49:31,672 - agent.ComputerAgent - INFO - Computer: double_click({'x': 808, 'y': 289})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 808, 'y': 289})\n",
+ "2025-08-11 17:49:32,317 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m17:49:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:33,724 - agent.ComputerAgent - INFO - Computer: type({'text': '=IF(E11<30,\"Fail\",IF(E11<60,\"Pass\",IF(E11<80,\"Average\",IF(E11<90,\"Above average\",IF(E11<100,\"Excellent\",\"Exceptional!\")))))'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=IF(E11<30,\"Fail\",IF(E11<60,\"Pass\",IF(E11<80,\"Average\",IF(E11<90,\"Above average\",IF(E11<100,\"Excellent\",\"Exceptional!\")))))'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4024/7340 [143:16<118:04, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:35,741 - agent.ComputerAgent - INFO - Computer: click({'x': 1013, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1013, 'y': 64})\n",
+ "\u001b[92m17:49:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:36,394 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 471, 'y': 429})\n",
+ "\u001b[92m17:49:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 55%|█████████████████████-------------------| 4027/7340 [143:18<117:53, 28.1 steps/min]2025-08-11 17:49:37,071 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:49:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:37,725 - agent.ComputerAgent - INFO - Computer: click({'x': 146, 'y': 150})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 146, 'y': 150})\n",
+ " 55%|█████████████████████-------------------| 4029/7340 [143:19<117:47, 28.1 steps/min]2025-08-11 17:49:38,740 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:49:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:39,383 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:49:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4030/7340 [143:21<117:44, 28.1 steps/min]\u001b[92m17:49:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:40,711 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:49:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:41,360 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:49:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:49:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4030/7340 [143:23<117:46, 28.1 steps/min]2025-08-11 17:49:42,011 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:49:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:43,040 - agent.ComputerAgent - INFO - Computer: click({'x': 773, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 773, 'y': 189})\n",
+ " 55%|█████████████████████-------------------| 4031/7340 [143:25<117:44, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:45,240 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m17:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4031/7340 [143:27<117:45, 28.1 steps/min]2025-08-11 17:49:45,871 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:49:46,510 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:49:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4031/7340 [143:28<117:46, 28.1 steps/min]\u001b[92m17:49:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m17:49:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:49,891 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4032/7340 [143:31<117:45, 28.1 steps/min]2025-08-11 17:49:50,911 - agent.ComputerAgent - INFO - Computer: click({'x': 913, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 913, 'y': 428})\n",
+ "\u001b[92m17:49:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:51,591 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ " 55%|█████████████████████-------------------| 4032/7340 [143:33<117:46, 28.1 steps/min]\u001b[92m17:49:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:49:52,247 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 236})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 236})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:52,872 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:49:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4033/7340 [143:36<117:44, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:54,932 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 128, 'y': 182}, {'x': 322, 'y': 219}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 128, 'y': 182}, {'x': 322, 'y': 219}]})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:55,601 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:49:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|█████████████████████-------------------| 4034/7340 [143:38<117:42, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:56,920 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 37})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 37})\n",
+ "2025-08-11 17:49:57,590 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 749, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 749, 'y': 440})\n",
+ "2025-08-11 17:49:58,219 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:49:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|█████████████████████-------------------| 4035/7340 [143:40<117:40, 28.1 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:49:58,872 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:49:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:49:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:49:59,529 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 115, 'y': 219})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4037/7340 [143:41<117:33, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/de88dba3-e688-4fae-b983-a0cdeb8ef3c6/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<117:31, 28.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4038/7340 [143:44<117:32, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 17:50:05,035 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4038/7340 [143:46<117:34, 28.1 steps/min]2025-08-11 17:50:05,894 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 17:50:06,591 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:50:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4039/7340 [143:48<117:31, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:07,266 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4039/7340 [143:49<117:32, 28.1 steps/min]2025-08-11 17:50:08,600 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:50:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:10,403 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:50:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73646789-482d-4b1c-8ec1-5a943d563fab/close \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4039/7340 [143:53<117:35, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:50:12,470 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:50:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:50:13,762 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4044/7340 [143:55<117:18, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:50:14,456 - agent.ComputerAgent - INFO - Computer: click({'x': 476, 'y': 436})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 476, 'y': 436})\n",
+ "2025-08-11 17:50:15,153 - agent.ComputerAgent - INFO - Computer: click({'x': 146, 'y': 509})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 146, 'y': 509})\n",
+ "2025-08-11 17:50:15,833 - agent.ComputerAgent - INFO - Computer: click({'x': 601, 'y': 194})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 601, 'y': 194})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:50:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4044/7340 [143:58<117:20, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]2025-08-11 17:50:17,828 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:50:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4047/7340 [143:59<117:09, 28.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/beafa529-961e-4382-b811-5d442e689644/close \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:01<117:11, 28.1 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it] 28.1 steps/min]\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:03<117:12, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<117:13, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:50:23,592 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:50:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.76s/it]2025-08-11 17:50:25,185 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it] 28.1 steps/min]2025-08-11 17:50:26,105 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:50:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:50:27,582 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]\u001b[92m17:50:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it] 28.1 steps/min]\n",
+ "2025-08-11 17:50:28,790 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:50:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:10<117:18, 28.1 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:11<117:19, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:12<117:20, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c898e6a1-68ea-4822-8d12-52633e08a154/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:14<117:21, 28.1 steps/min]\u001b[92m17:50:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/reset \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:16<117:23, 28.1 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:36,378 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:18<117:24, 28.0 steps/min]2025-08-11 17:50:37,480 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:50:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4047/7340 [144:19<117:25, 28.0 steps/min]2025-08-11 17:50:38,486 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:50:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:40,188 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'delete'})\n",
+ " 55%|██████████████████████------------------| 4048/7340 [144:22<117:25, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4048/7340 [144:23<117:25, 28.0 steps/min]2025-08-11 17:50:42,894 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:50:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4048/7340 [144:24<117:26, 28.0 steps/min]\u001b[92m17:50:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:50:44,054 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 180, 'y': 181}, {'x': 226, 'y': 478}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4048/7340 [144:26<117:27, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:45,863 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:50:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:28<117:25, 28.0 steps/min]\u001b[92m17:50:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:29<117:26, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:48,866 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:31<117:27, 28.0 steps/min]\u001b[92m17:50:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:50:50,194 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:50:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:51,607 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:50:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:50:52,968 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:35<117:31, 28.0 steps/min]\u001b[92m17:50:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:50:54,914 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:50:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:36<117:32, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:38<117:34, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:50:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:40<117:35, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:42<117:36, 28.0 steps/min]\u001b[92m17:51:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:01,800 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 20, 'y': 105})\n",
+ " 55%|██████████████████████------------------| 4049/7340 [144:43<117:37, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:51:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:45<117:35, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:51:04,758 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:47<117:36, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:51:06,165 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:51:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:49<117:38, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:50<117:39, 28.0 steps/min]2025-08-11 17:51:09,914 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:51:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:51<117:40, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:53<117:42, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:55<117:43, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:56<117:44, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:51:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4050/7340 [144:57<117:45, 27.9 steps/min]\u001b[92m17:51:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:16,818 - agent.ComputerAgent - INFO - Computer: click({'x': 873, 'y': 271})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4051/7340 [144:59<117:43, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4051/7340 [145:02<117:45, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4051/7340 [145:03<117:46, 27.9 steps/min]2025-08-11 17:51:22,624 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:51:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:51:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4051/7340 [145:08<117:50, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:51:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4051/7340 [145:09<117:51, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4051/7340 [145:12<117:53, 27.9 steps/min]\u001b[92m17:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:51:32,974 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'backspace'})\n",
+ " 55%|██████████████████████------------------| 4052/7340 [145:16<117:53, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:51:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4052/7340 [145:17<117:54, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4052/7340 [145:18<117:55, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ed59654b-b781-492f-98e6-4799284f5db3/reset \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4052/7340 [145:19<117:55, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:51:39,827 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4052/7340 [145:21<117:57, 27.9 steps/min]2025-08-11 17:51:40,493 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:51:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4052/7340 [145:24<117:59, 27.9 steps/min]\u001b[92m17:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:51:43,731 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 362})\n",
+ " 55%|██████████████████████------------------| 4053/7340 [145:26<117:57, 27.9 steps/min]\u001b[92m17:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:45,962 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 160})\n",
+ " 55%|██████████████████████------------------| 4054/7340 [145:28<117:55, 27.9 steps/min]\u001b[92m17:51:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:48,165 - agent.ComputerAgent - INFO - Computer: click({'x': 404, 'y': 595})\n",
+ " 55%|██████████████████████------------------| 4055/7340 [145:30<117:52, 27.9 steps/min]\u001b[92m17:51:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:49,860 - agent.ComputerAgent - INFO - Computer: click({'x': 813, 'y': 616})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4055/7340 [145:31<117:53, 27.9 steps/min]2025-08-11 17:51:50,975 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4056/7340 [145:32<117:50, 27.9 steps/min]\u001b[92m17:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:51,665 - agent.ComputerAgent - INFO - Computer: click({'x': 519, 'y': 104})\n",
+ " 55%|██████████████████████------------------| 4057/7340 [145:33<117:47, 27.9 steps/min]\u001b[92m17:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:53,366 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 95})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4057/7340 [145:35<117:48, 27.9 steps/min]2025-08-11 17:51:54,517 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:51:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4058/7340 [145:36<117:45, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:55,175 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:51:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:51:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:51:55,854 - agent.ComputerAgent - INFO - Computer: click({'x': 148, 'y': 534})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:51:57,278 - agent.ComputerAgent - INFO - Computer: type({'text': '-30'})\n",
+ " 55%|██████████████████████------------------| 4058/7340 [145:39<117:47, 27.9 steps/min]\u001b[92m17:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:51:57,962 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 91})\n",
+ "2025-08-11 17:51:58,616 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:51:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4060/7340 [145:40<117:41, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:00,027 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4061/7340 [145:41<117:38, 27.9 steps/min]2025-08-11 17:52:00,728 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:52:00,729 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 289})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 289})\n",
+ "\u001b[92m17:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:01,390 - agent.ComputerAgent - INFO - Computer: click({'x': 536, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 536, 'y': 429})\n",
+ " 55%|██████████████████████------------------| 4061/7340 [145:43<117:39, 27.9 steps/min]2025-08-11 17:52:02,075 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:52:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4063/7340 [145:44<117:32, 27.9 steps/min]\u001b[92m17:52:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:03,612 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:52:03,613 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 576})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 576})\n",
+ " 55%|██████████████████████------------------| 4063/7340 [145:45<117:33, 27.9 steps/min]\u001b[92m17:52:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:05,194 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 207})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 211, 'y': 207})\n",
+ " 55%|██████████████████████------------------| 4064/7340 [145:46<117:30, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4065/7340 [145:47<117:27, 27.9 steps/min]\u001b[92m17:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:06,901 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:07,517 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:52:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4065/7340 [145:49<117:28, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:08,882 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:52:08,883 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "\u001b[92m17:52:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:09,508 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:52:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 55%|██████████████████████------------------| 4066/7340 [145:51<117:27, 27.9 steps/min]\u001b[92m17:52:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:10,812 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 371, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 371, 'y': 75})\n",
+ "2025-08-11 17:52:11,432 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:52:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:52:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:12,092 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:52:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:12,743 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:52:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:52:13,418 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 164, 'y': 335}, {'x': 332, 'y': 321}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 164, 'y': 335}, {'x': 332, 'y': 321}]})\n",
+ "\u001b[92m17:52:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:52:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4066/7340 [145:55<117:30, 27.9 steps/min]2025-08-11 17:52:14,783 - agent.ComputerAgent - INFO - Computer: click({'x': 684, 'y': 327})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 684, 'y': 327})\n",
+ "\u001b[92m17:52:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:16,074 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+,'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+,'})\n",
+ "2025-08-11 17:52:16,739 - agent.ComputerAgent - INFO - Computer: click({'x': 71, 'y': 458})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 71, 'y': 458})\n",
+ " 55%|██████████████████████------------------| 4068/7340 [145:58<117:24, 27.9 steps/min]\u001b[92m17:52:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:17,373 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:52:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:52:18,777 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:52:19,456 - agent.ComputerAgent - INFO - Computer: click({'x': 166, 'y': 149, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 166, 'y': 149, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:20,800 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 55%|██████████████████████------------------| 4070/7340 [146:02<117:20, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:22,462 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'delete'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'delete'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:23,754 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ " 55%|██████████████████████------------------| 4071/7340 [146:05<117:18, 27.9 steps/min]\u001b[92m17:52:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:52:24,465 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:52:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:25,127 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4072/7340 [146:06<117:15, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:26,806 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:52:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:52:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4072/7340 [146:09<117:18, 27.9 steps/min]\u001b[92m17:52:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:28,524 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 33})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 111, 'y': 33})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:29,209 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:52:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 55%|██████████████████████------------------| 4072/7340 [146:10<117:19, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:52:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 55%|██████████████████████------------------| 4073/7340 [146:12<117:16, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:31,053 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 175})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 211, 'y': 175})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:31,685 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:52:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:52:32,335 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:52:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:52:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 55%|██████████████████████------------------| 4073/7340 [146:14<117:17, 27.9 steps/min]2025-08-11 17:52:33,006 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 512, 'y': 402})\n",
+ "2025-08-11 17:52:33,674 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:52:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4074/7340 [146:16<117:15, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:34,995 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:52:35,684 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4075/7340 [146:18<117:13, 27.9 steps/min]\u001b[92m17:52:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:52:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:37,025 - agent.ComputerAgent - INFO - Computer: click({'x': 757, 'y': 121})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 757, 'y': 121})\n",
+ "2025-08-11 17:52:37,651 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:52:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4075/7340 [146:20<117:14, 27.8 steps/min]\u001b[92m17:52:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:52:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:38,933 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 595})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 400, 'y': 595})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:39,962 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4076/7340 [146:21<117:12, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:40,648 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:41,319 - agent.ComputerAgent - INFO - Computer: click({'x': 274, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 274, 'y': 52})\n",
+ " 56%|██████████████████████------------------| 4077/7340 [146:23<117:09, 27.9 steps/min]2025-08-11 17:52:41,985 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:52:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4078/7340 [146:26<117:07, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4078/7340 [146:27<117:09, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4078/7340 [146:28<117:10, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:47,528 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:48,180 - agent.ComputerAgent - INFO - Computer: click({'x': 379, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 379, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:50,158 - agent.ComputerAgent - INFO - Computer: type({'text': '=IF(VALUE(E11)<30;\"Fail\";IF(VALUE(E11)<60;\"Pass\";IF(VALUE(E11)<80;\"Average\";IF(VALUE(E11)<90;\"Above average\";IF(VALUE(E11)<100;\"Excellent\";\"Exceptional!\")))))'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=IF(VALUE(E11)<30;\"Fail\";IF(VALUE(E11)<60;\"Pass\";IF(VALUE(E11)<80;\"Average\";IF(VALUE(E11)<90;\"Above average\";IF(VALUE(E11)<100;\"Excellent\";\"Exceptional!\")))))'})\n",
+ " 56%|██████████████████████------------------| 4078/7340 [146:31<117:12, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:52:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:51,457 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:52:51,457 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 368, 'y': 304})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:52,136 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:52:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4080/7340 [146:33<117:06, 27.8 steps/min]\n",
+ "\u001b[92m17:52:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:52:53,498 - agent.ComputerAgent - INFO - Computer: click({'x': 172, 'y': 150})\n",
+ "2025-08-11 17:52:54,132 - agent.ComputerAgent - INFO - Computer: click({'x': 888, 'y': 428})\n",
+ " 56%|██████████████████████------------------| 4081/7340 [146:35<117:04, 27.8 steps/min]\u001b[92m17:52:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:52:54,780 - agent.ComputerAgent - INFO - Computer: click({'x': 536, 'y': 429})\n",
+ "2025-08-11 17:52:55,438 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:52:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4083/7340 [146:37<116:57, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4084/7340 [146:38<116:54, 27.9 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 56%|██████████████████████------------------| 4084/7340 [146:39<116:55, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:52:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:52:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:52:58,960 - agent.ComputerAgent - INFO - Computer: click({'x': 919, 'y': 65})\n",
+ " 56%|██████████████████████------------------| 4084/7340 [146:40<116:56, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4085/7340 [146:41<116:53, 27.8 steps/min]2025-08-11 17:53:00,659 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:53:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:53:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:01,338 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 19, 'y': 238})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4085/7340 [146:43<116:54, 27.8 steps/min]\n",
+ "2025-08-11 17:53:02,676 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:53:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:05,701 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4086/7340 [146:47<116:54, 27.8 steps/min]\u001b[92m17:53:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:07,039 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 103})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:08,534 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:53:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:53:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4086/7340 [146:50<116:56, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:09,201 - agent.ComputerAgent - INFO - Computer: click({'x': 287, 'y': 520})\n",
+ "\u001b[92m17:53:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:11,384 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:53:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:53:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:12,044 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4087/7340 [146:53<116:55, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:12,724 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m17:53:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:53:13,375 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 369})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4088/7340 [146:56<116:53, 27.8 steps/min]\u001b[92m17:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:15,395 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 76})\n",
+ "\u001b[92m17:53:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:16,046 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 273, 'y': 182}, {'x': 189, 'y': 538}]})\n",
+ " 56%|██████████████████████------------------| 4089/7340 [146:57<116:50, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:16,725 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c0815010-faa0-495f-a2bd-bca30f9b2c7f/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:17,382 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 418})\n",
+ " 56%|██████████████████████------------------| 4091/7340 [146:59<116:44, 27.8 steps/min]\u001b[92m17:53:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:19,259 - agent.ComputerAgent - INFO - Computer: click({'x': 202, 'y': 320})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4092/7340 [147:00<116:41, 27.8 steps/min]2025-08-11 17:53:19,915 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:53:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:21,928 - agent.ComputerAgent - INFO - Computer: type({'text': 'Zurich'})\n",
+ " 56%|██████████████████████------------------| 4093/7340 [147:03<116:39, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:53:22,563 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:53:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it] 27.8 steps/min]2025-08-11 17:53:24,184 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:53:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it] 27.8 steps/min]2025-08-11 17:53:25,774 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:53:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4094/7340 [147:07<116:39, 27.8 steps/min]2025-08-11 17:53:26,843 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:53:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it] 27.8 steps/min]\n",
+ "2025-08-11 17:53:28,013 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:53:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:53:28,666 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:53:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4094/7340 [147:10<116:41, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:29,995 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:53:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4094/7340 [147:11<116:42, 27.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4094/7340 [147:13<116:43, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:31,625 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:32,680 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:34,014 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:35,333 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ " 56%|██████████████████████------------------| 4094/7340 [147:17<116:46, 27.8 steps/min]2025-08-11 17:53:35,973 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 213})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:37,345 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:39,251 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:53:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:53:39,894 - agent.ComputerAgent - INFO - Computer: click({'x': 12, 'y': 524})\n",
+ "2025-08-11 17:53:40,569 - agent.ComputerAgent - INFO - Computer: click({'x': 442, 'y': 414})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:41,868 - agent.ComputerAgent - INFO - Computer: type({'text': 'Speechify Text to Speech Voice Reader'})\n",
+ " 56%|██████████████████████------------------| 4096/7340 [147:23<116:44, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:43,610 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 207})\n",
+ "2025-08-11 17:53:44,300 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 339, 'y': 241})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:45,577 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(D2:D12)'})\n",
+ " 56%|██████████████████████------------------| 4100/7340 [147:27<116:31, 27.8 steps/min]\u001b[92m17:53:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:46,211 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 460})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:47,527 - agent.ComputerAgent - INFO - Computer: click({'x': 208, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:53:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4103/7340 [147:29<116:22, 27.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:53:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:50,216 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 513})\n",
+ "\u001b[92m17:53:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4105/7340 [147:31<116:15, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:53:50,880 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 429})\n",
+ "\u001b[92m17:53:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:53:51,870 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 404})\n",
+ " 56%|██████████████████████------------------| 4108/7340 [147:36<116:08, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4108/7340 [147:37<116:08, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:57,802 - agent.ComputerAgent - INFO - Computer: type({'text': '-30'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4108/7340 [147:39<116:10, 27.8 steps/min]2025-08-11 17:53:58,535 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:53:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:53:59,205 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m17:53:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4109/7340 [147:40<116:07, 27.8 steps/min]2025-08-11 17:53:59,892 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:53:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:54:00,554 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:54:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:54:01,198 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4109/7340 [147:43<116:09, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:54:02,555 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m17:54:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4109/7340 [147:44<116:10, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:54:03,580 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:54:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:54:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4109/7340 [147:45<116:11, 27.8 steps/min]2025-08-11 17:54:05,405 - agent.ComputerAgent - INFO - Computer: click({'x': 961, 'y': 621})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4109/7340 [147:47<116:12, 27.8 steps/min]2025-08-11 17:54:06,082 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:54:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4110/7340 [147:49<116:10, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:54:08,248 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:54:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:54:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4110/7340 [147:50<116:10, 27.8 steps/min]2025-08-11 17:54:08,949 - agent.ComputerAgent - INFO - Computer: click({'x': 105, 'y': 482})\n",
+ "2025-08-11 17:54:09,599 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:54:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4110/7340 [147:51<116:11, 27.8 steps/min]2025-08-11 17:54:10,659 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:54:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6543d2df-ad27-4301-babf-39cf80a164f3/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4111/7340 [147:53<116:09, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/faf928a4-5aec-45e2-950a-78588c9a2ff9/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:54:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4111/7340 [147:54<116:10, 27.8 steps/min]2025-08-11 17:54:13,755 - agent.ComputerAgent - INFO - Computer: click({'x': 635, 'y': 332})\n",
+ "2025-08-11 17:54:14,430 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:54:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4111/7340 [147:56<116:11, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:54:16,296 - agent.ComputerAgent - INFO - Agent: I saved the current page to the Bookmarks bar. You should see “jalammar.gith...” in the bar under the address bar. Task completed.\n",
+ "2025-08-11 17:54:16,977 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 232\n",
+ " - prompt_tokens: 8760\n",
+ " - total_tokens: 8992\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0133\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af801c79-4573-4b66-93a5-ab02a8ebb316/close \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4113/7340 [148:00<116:07, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:06, 2.23s/it]2025-08-11 17:54:20,539 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:54:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4113/7340 [148:03<116:09, 27.8 steps/min]\u001b[92m17:54:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:54:21,920 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m17:54:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:04<00:04, 2.08s/it]2025-08-11 17:54:23,344 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "2025-08-11 17:54:24,168 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 395\n",
+ " - prompt_tokens: 7457\n",
+ " - total_tokens: 7852\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 5248\n",
+ " - response_cost: $0.0074\n",
+ "Loading checkpoint shards: 75%|███████▌  | 3/4 [00:05<00:01,  1.89s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.61s/it] 27.8 steps/min]\n",
+ "2025-08-11 17:54:25,409 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m17:54:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:54:27,120 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'delete'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4115/7340 [148:09<116:06, 27.8 steps/min]\u001b[92m17:54:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.37s/it] 27.8 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a1613365-876e-432c-9025-bb7d464c9014/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:54:30,450 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:54:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4126/7340 [148:12<115:26, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4126/7340 [148:13<115:27, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4126/7340 [148:14<115:28, 27.8 steps/min]2025-08-11 17:54:33,350 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:54:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.70s/it] 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4126/7340 [148:16<115:29, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it] 27.8 steps/min]\n",
+ " 56%|██████████████████████------------------| 4126/7340 [148:20<115:33, 27.8 steps/min]\u001b[92m17:54:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:54:39,901 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 527})\n",
+ " 56%|██████████████████████------------------| 4127/7340 [148:22<115:30, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:54:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:54:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:54:42,247 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 303})\n",
+ " 56%|██████████████████████------------------| 4128/7340 [148:25<115:29, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:54:46,325 - agent.ComputerAgent - INFO - Computer: click({'x': 235, 'y': 150})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4128/7340 [148:28<115:31, 27.8 steps/min]2025-08-11 17:54:47,638 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m17:54:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:54:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:54:48,311 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 175})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4129/7340 [148:30<115:29, 27.8 steps/min]2025-08-11 17:54:48,991 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:54:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4141/7340 [148:31<114:43, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5d9c8cb2-0fe5-4734-b73e-fffbf15d315b/close \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4141/7340 [148:32<114:44, 27.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4141/7340 [148:33<114:45, 27.9 steps/min]\u001b[92m17:54:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:54:52,897 - agent.ComputerAgent - INFO - Computer: click({'x': 578, 'y': 446})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 56%|██████████████████████------------------| 4141/7340 [148:35<114:47, 27.9 steps/min]\u001b[92m17:54:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:54:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:54:54,220 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 430})\n",
+ "\u001b[92m17:54:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:54:54,883 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 298})\n",
+ " 56%|██████████████████████------------------| 4142/7340 [148:36<114:44, 27.9 steps/min]2025-08-11 17:54:55,818 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]\u001b[92m17:54:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:37<114:37, 27.9 steps/min]2025-08-11 17:54:56,993 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:54:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:38<114:38, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:39<114:39, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.77s/it]\u001b[92m17:54:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:41<114:40, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.50s/it]\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:42<114:41, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:01,621 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:55:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:43<114:42, 27.9 steps/min]2025-08-11 17:55:02,316 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:55:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:03,522 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:55:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4144/7340 [148:45<114:43, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:55:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:04,585 - agent.ComputerAgent - INFO - Computer: double_click({'x': 381, 'y': 276})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 381, 'y': 276})\n",
+ " 56%|██████████████████████------------------| 4145/7340 [148:51<114:44, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4145/7340 [148:52<114:45, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:12,727 - agent.ComputerAgent - INFO - Computer: type({'text': 'Lisp'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Lisp'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:13,978 - agent.ComputerAgent - INFO - Computer: type({'text': \"find . -type f -name '*.php' -print0 | xargs -0 cat | wc -l\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"find . -type f -name '*.php' -print0 | xargs -0 cat | wc -l\"})\n",
+ " 56%|██████████████████████------------------| 4145/7340 [148:55<114:47, 27.8 steps/min]2025-08-11 17:55:14,598 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:55:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 56%|██████████████████████------------------| 4147/7340 [148:56<114:40, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 56%|██████████████████████------------------| 4147/7340 [148:57<114:41, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:55:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:16,883 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 354})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 354})\n",
+ " 56%|██████████████████████------------------| 4147/7340 [148:58<114:42, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4148/7340 [149:00<114:39, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4148/7340 [149:01<114:40, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:20,776 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:55:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4148/7340 [149:03<114:42, 27.8 steps/min]\u001b[92m17:55:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:55:22,973 - agent.ComputerAgent - INFO - Computer: click({'x': 759, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 759, 'y': 275})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e4511ed1-a184-44ef-9245-68929a78fe33/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:24,319 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 57%|██████████████████████------------------| 4148/7340 [149:06<114:44, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:55:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:55:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:55:25,675 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:55:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:55:26,337 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 489})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 151, 'y': 489})\n",
+ " 57%|██████████████████████------------------| 4149/7340 [149:08<114:41, 27.8 steps/min]2025-08-11 17:55:26,946 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:55:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4150/7340 [149:09<114:38, 27.8 steps/min]\u001b[92m17:55:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:28,123 - agent.ComputerAgent - INFO - Computer: click({'x': 251, 'y': 504})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 251, 'y': 504})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:29,433 - agent.ComputerAgent - INFO - Computer: type({'text': 'find . -type f -name \"*.php\" -print0 | xargs -0 wc -l | tail -n1 | awk \\'{print $1}\\''})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'find . -type f -name \"*.php\" -print0 | xargs -0 wc -l | tail -n1 | awk \\'{print $1}\\''})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4150/7340 [149:11<114:40, 27.8 steps/min]2025-08-11 17:55:30,075 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:55:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:55:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:30,748 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 298})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 306, 'y': 298})\n",
+ " 57%|██████████████████████------------------| 4152/7340 [149:12<114:33, 27.8 steps/min]2025-08-11 17:55:31,790 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:55:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4153/7340 [149:13<114:30, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:32,979 - agent.ComputerAgent - INFO - Computer: click({'x': 702, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 702, 'y': 133})\n",
+ " 57%|██████████████████████------------------| 4153/7340 [149:14<114:31, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:34,157 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:55:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4154/7340 [149:15<114:28, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4154/7340 [149:17<114:29, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:55:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:55:35,992 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 151, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 151, 'button': 'left'})\n",
+ " 57%|██████████████████████------------------| 4154/7340 [149:18<114:30, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:55:37,811 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:55:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd363353-8e00-4f7c-8dc1-8ffbaeac116b/invoke \"HTTP/1.1 500 Internal Server Error\"\n",
+ " 57%|██████████████████████------------------| 4155/7340 [149:19<114:27, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:55:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:55:39,141 - agent.ComputerAgent - INFO - Computer: double_click({'x': 440, 'y': 434})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 440, 'y': 434})\n",
+ "2025-08-11 17:55:39,841 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:55:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:55:41,152 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4175/7340 [149:22<113:14, 27.9 steps/min]\u001b[92m17:55:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:55:41,776 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:55:41,778 - agent.ComputerAgent - INFO - Computer: click({'x': 503, 'y': 225})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 503, 'y': 225})\n",
+ "2025-08-11 17:55:42,441 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:55:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4176/7340 [149:24<113:11, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 17:55:43,796 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:55:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:25<113:01, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/448620aa-cee2-4394-81f2-d8efa1937c36/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]2025-08-11 17:55:45,435 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:55:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:55:46,067 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:55:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:27<113:03, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.66s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:28<113:04, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.61s/it]\u001b[92m17:55:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:30<113:05, 28.0 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.47s/it]\n",
+ "2025-08-11 17:55:49,533 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:55:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:31<113:05, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:32<113:06, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\n",
+ "2025-08-11 17:55:52,679 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.94s/it]INFO:agent.ComputerAgent:LLM processing started with 16 messages\u001b[92m17:55:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:34<113:08, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4179/7340 [149:35<113:09, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.51s/it]\n",
+ "2025-08-11 17:55:55,811 - agent.ComputerAgent - INFO - Computer: type({'text': 'Tamiflu side effects'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Tamiflu side effects'})\n",
+ " 57%|██████████████████████------------------| 4180/7340 [149:38<113:07, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:55:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4180/7340 [149:40<113:08, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4180/7340 [149:41<113:09, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4180/7340 [149:42<113:10, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:56:01,361 - agent.ComputerAgent - INFO - Computer: click({'x': 488, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 488, 'y': 440})\n",
+ " 57%|██████████████████████------------------| 4180/7340 [149:43<113:11, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:56:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4181/7340 [149:45<113:09, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:04,994 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 530, 'scroll_x': 0, 'x': 345, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 530, 'scroll_x': 0, 'x': 345, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:07,009 - agent.ComputerAgent - INFO - Computer: type({'text': '-30'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '-30'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:08,852 - agent.ComputerAgent - INFO - Computer: type({'text': 'rulers'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'rulers'})\n",
+ " 57%|██████████████████████------------------| 4181/7340 [149:50<113:12, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:10,573 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 195})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 195})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:11,999 - agent.ComputerAgent - INFO - Computer: click({'x': 653, 'y': 303})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 653, 'y': 303})\n",
+ "\u001b[92m17:56:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:56:12,679 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 234})\n",
+ " 57%|██████████████████████------------------| 4184/7340 [149:54<113:04, 27.9 steps/min]2025-08-11 17:56:13,638 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 65})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 65})\n",
+ "2025-08-11 17:56:14,733 - agent.ComputerAgent - INFO - Computer: click({'x': 121, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 121, 'y': 152})\n",
+ "\u001b[92m17:56:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4187/7340 [149:56<112:54, 27.9 steps/min]2025-08-11 17:56:15,442 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:56:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:18,035 - agent.ComputerAgent - INFO - Computer: click({'x': 593, 'y': 30})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 593, 'y': 30})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4189/7340 [149:59<112:49, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:56:19,316 - agent.ComputerAgent - INFO - Computer: click({'x': 179, 'y': 150})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 179, 'y': 150})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/070faeba-5155-485d-a1b3-4e3e06d3da71/close \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4195/7340 [150:02<112:28, 28.0 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4195/7340 [150:03<112:29, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/reset \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4195/7340 [150:04<112:30, 28.0 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:56:24,368 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4195/7340 [150:06<112:31, 27.9 steps/min]2025-08-11 17:56:25,033 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:25,695 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:07<112:29, 28.0 steps/min]2025-08-11 17:56:26,342 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:56:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:27,033 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:27,713 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:09<112:30, 27.9 steps/min]2025-08-11 17:56:28,403 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:56:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:29,084 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:56:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:11<112:31, 27.9 steps/min]2025-08-11 17:56:29,744 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:56:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:30,413 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:56:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:12<112:32, 27.9 steps/min]2025-08-11 17:56:31,080 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m17:56:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:56:31,726 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:56:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:13<112:33, 27.9 steps/min]2025-08-11 17:56:32,373 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:56:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:56:33,032 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:17<112:36, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:56:37,443 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:56:37,444 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+r'})\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:19<112:37, 27.9 steps/min]2025-08-11 17:56:38,632 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:21<112:39, 27.9 steps/min]\u001b[92m17:56:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:23<112:40, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:24<112:42, 27.9 steps/min]\u001b[92m17:56:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:56:44,521 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4196/7340 [150:26<112:43, 27.9 steps/min]\u001b[92m17:56:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it] 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:56:51,546 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:56:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4197/7340 [150:33<112:44, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it] 27.9 steps/min]\n",
+ " 57%|██████████████████████------------------| 4197/7340 [150:35<112:46, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 57%|██████████████████████------------------| 4197/7340 [150:36<112:47, 27.9 steps/min]\u001b[92m17:56:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:55,461 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 443})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 430, 'y': 443})\n",
+ "\u001b[92m17:56:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:56,105 - agent.ComputerAgent - INFO - Computer: click({'x': 356, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 356, 'y': 627})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4197/7340 [150:38<112:48, 27.9 steps/min]\u001b[92m17:56:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:56:57,381 - agent.ComputerAgent - INFO - Computer: click({'x': 647, 'y': 525})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 647, 'y': 525})\n",
+ "\u001b[92m17:56:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:56:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:56:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4199/7340 [150:39<112:42, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:56:58,682 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 504})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 585, 'y': 504})\n",
+ "\u001b[92m17:56:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:56:59,333 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 274, 'y': 193}, {'x': 210, 'y': 537}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 274, 'y': 193}, {'x': 210, 'y': 537}]})\n",
+ "\u001b[92m17:56:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4200/7340 [150:41<112:39, 27.9 steps/min]2025-08-11 17:56:59,973 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 574})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 278, 'y': 574})\n",
+ " 57%|██████████████████████------------------| 4202/7340 [150:42<112:32, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4203/7340 [150:43<112:29, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:03,327 - agent.ComputerAgent - INFO - Computer: type({'text': 'find . -type f -name \"*.php\" -exec cat {} + | wc -l'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'find . -type f -name \"*.php\" -exec cat {} + | wc -l'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4203/7340 [150:45<112:31, 27.9 steps/min]\u001b[92m17:57:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:57:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:05,154 - agent.ComputerAgent - INFO - Computer: click({'x': 157, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 157, 'y': 62})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4204/7340 [150:46<112:28, 27.9 steps/min]2025-08-11 17:57:05,844 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:57:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:57:06,514 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:57:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4205/7340 [150:48<112:25, 27.9 steps/min]2025-08-11 17:57:07,167 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:57:07,844 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4205/7340 [150:49<112:26, 27.9 steps/min]2025-08-11 17:57:08,513 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:57:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4205/7340 [150:50<112:27, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:09,825 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:57:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:57:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:10,494 - agent.ComputerAgent - INFO - Computer: click({'x': 717, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 717, 'y': 74})\n",
+ " 57%|██████████████████████------------------| 4205/7340 [150:52<112:28, 27.9 steps/min]2025-08-11 17:57:11,121 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:57:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:12,463 - agent.ComputerAgent - INFO - Computer: click({'x': 204, 'y': 233, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 204, 'y': 233, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4206/7340 [150:54<112:26, 27.9 steps/min]2025-08-11 17:57:13,096 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:57:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4207/7340 [150:56<112:24, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:15,265 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:57:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4207/7340 [150:57<112:25, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:57:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:17,059 - agent.ComputerAgent - INFO - Computer: click({'x': 357, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 357, 'y': 627})\n",
+ " 57%|██████████████████████------------------| 4207/7340 [150:58<112:26, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:18,317 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:57:18,319 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_current_url({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4208/7340 [151:00<112:23, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:19,675 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m17:57:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:57:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:20,368 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 689})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 689})\n",
+ " 57%|██████████████████████------------------| 4208/7340 [151:02<112:24, 27.9 steps/min]2025-08-11 17:57:21,025 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:57:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4209/7340 [151:03<112:22, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:22,724 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:57:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:24,414 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'delete'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'delete'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4209/7340 [151:06<112:24, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:26,101 - agent.ComputerAgent - INFO - Computer: click({'x': 907, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 907, 'y': 428})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:27,785 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:29,790 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ " 57%|██████████████████████------------------| 4210/7340 [151:11<112:24, 27.8 steps/min]2025-08-11 17:57:30,469 - agent.ComputerAgent - INFO - Computer: click({'x': 593, 'y': 30})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 593, 'y': 30})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:31,817 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://www.delta.com/flight-search/book-a-flight'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://www.delta.com/flight-search/book-a-flight'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:57:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:33,165 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:57:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:57:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 57%|██████████████████████------------------| 4213/7340 [151:15<112:16, 27.9 steps/min]2025-08-11 17:57:34,518 - agent.ComputerAgent - INFO - Computer: click({'x': 649, 'y': 494})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 649, 'y': 494})\n",
+ "\u001b[92m17:57:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:35,807 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 183, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 243, 'y': 183, 'button': 'left'})\n",
+ "2025-08-11 17:57:36,479 - agent.ComputerAgent - INFO - Computer: click({'x': 277, 'y': 198})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 277, 'y': 198})\n",
+ " 57%|██████████████████████------------------| 4215/7340 [151:18<112:10, 27.9 steps/min]\u001b[92m17:57:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:37,121 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 656, 'y': 304})\n",
+ " 57%|██████████████████████------------------| 4218/7340 [151:19<112:00, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 17:57:38,771 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:57:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4219/7340 [151:20<111:57, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 57%|██████████████████████------------------| 4219/7340 [151:22<111:58, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4219/7340 [151:23<111:59, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:42,500 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:57:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:43,190 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:57:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:57:43,830 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:57:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:57:44,501 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m17:57:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 57%|██████████████████████------------------| 4219/7340 [151:26<112:01, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:46,200 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:47,536 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 232, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 232, 'button': 'left'})\n",
+ " 57%|██████████████████████------------------| 4219/7340 [151:29<112:03, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:48,834 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ "2025-08-11 17:57:49,501 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:57:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4221/7340 [151:31<111:57, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:50,821 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 17:57:51,442 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:57:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4221/7340 [151:33<111:59, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:52,761 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:57:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:57:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:53,419 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m17:57:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:57:54,062 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 512, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 512, 'y': 386})\n",
+ " 58%|███████████████████████-----------------| 4222/7340 [151:35<111:57, 27.9 steps/min]2025-08-11 17:57:54,732 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:57:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:57:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4223/7340 [151:37<111:54, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:57:56,080 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:57:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:57:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:57:56,856 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 120, 'y': 53})\n",
+ " 58%|███████████████████████-----------------| 4223/7340 [151:38<111:55, 27.8 steps/min]2025-08-11 17:57:57,970 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:57:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4224/7340 [151:39<111:52, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:57:59,165 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:57:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4224/7340 [151:40<111:53, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:00,360 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m17:58:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4224/7340 [151:42<111:54, 27.8 steps/min]2025-08-11 17:58:01,380 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:58:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4224/7340 [151:44<111:56, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:03,605 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:58:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4224/7340 [151:45<111:56, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:04,606 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:58:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:58:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:07,737 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4224/7340 [151:50<112:00, 27.8 steps/min]\u001b[92m17:58:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:58:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:58:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:58:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:09,581 - agent.ComputerAgent - INFO - Computer: click({'x': 294, 'y': 250})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 294, 'y': 250})\n",
+ " 58%|███████████████████████-----------------| 4225/7340 [151:51<111:57, 27.8 steps/min]\u001b[92m17:58:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:10,272 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:58:10,272 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 211})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 211})\n",
+ "\u001b[92m17:58:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:10,907 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 183, 'y': 193}, {'x': 209, 'y': 537}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 183, 'y': 193}, {'x': 209, 'y': 537}]})\n",
+ " 58%|███████████████████████-----------------| 4226/7340 [151:52<111:54, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:12,196 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://drugs.com/sfx/tamiflu-side-effects.html'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://drugs.com/sfx/tamiflu-side-effects.html'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4228/7340 [151:53<111:48, 27.8 steps/min]2025-08-11 17:58:12,851 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m17:58:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4234/7340 [151:54<111:26, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03201a42-df17-4896-9367-120fd49d3bb7/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:14,671 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:58:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4234/7340 [151:57<111:28, 27.9 steps/min]\u001b[92m17:58:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:17,361 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:58:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4234/7340 [151:59<111:29, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4234/7340 [152:00<111:30, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it] 27.9 steps/min]2025-08-11 17:58:20,861 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:58:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4234/7340 [152:02<111:32, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]2025-08-11 17:58:23,369 - agent.ComputerAgent - INFO - Computer: type({'text': \"grep -r --include='*.php' -n '^' . | wc -l\"})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]INFO:agent.ComputerAgent:Computer: type({'text': \"grep -r --include='*.php' -n '^' . | wc -l\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it] 27.8 steps/min]\n",
+ "2025-08-11 17:58:24,022 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m17:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4235/7340 [152:06<111:31, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:25,609 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m17:58:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:26,923 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 276, 'button': 'left'})\n",
+ "\u001b[92m17:58:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:58:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:27,572 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ " 58%|███████████████████████-----------------| 4235/7340 [152:09<111:33, 27.8 steps/min]\u001b[92m17:58:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:58:28,243 - agent.ComputerAgent - INFO - Computer: click({'x': 587, 'y': 504})\n",
+ "2025-08-11 17:58:28,918 - agent.ComputerAgent - INFO - Computer: click({'x': 637, 'y': 332})\n",
+ "\u001b[92m17:58:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:58:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:58:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:29,573 - agent.ComputerAgent - INFO - Computer: click({'x': 837, 'y': 514})\n",
+ "2025-08-11 17:58:30,271 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4236/7340 [152:12<111:32, 27.8 steps/min]\u001b[92m17:58:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:31,563 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:58:31,564 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 332})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:33,541 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "2025-08-11 17:58:34,192 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 203\n",
+ " - prompt_tokens: 6716\n",
+ " - total_tokens: 6919\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0104\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4241/7340 [152:16<111:16, 27.9 steps/min]\u001b[92m17:58:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:35,549 - agent.ComputerAgent - INFO - Computer: click({'x': 150, 'y': 468})\n",
+ "\u001b[92m17:58:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:58:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:36,843 - agent.ComputerAgent - INFO - Computer: click({'x': 147, 'y': 64})\n",
+ "\u001b[92m17:58:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4242/7340 [152:18<111:14, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:37,531 - agent.ComputerAgent - INFO - Computer: click({'x': 693, 'y': 130})\n",
+ "\u001b[92m17:58:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:38,182 - agent.ComputerAgent - INFO - Computer: click({'x': 708, 'y': 154})\n",
+ " 58%|███████████████████████-----------------| 4244/7340 [152:19<111:07, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4246/7340 [152:21<111:01, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:58:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:40,559 - agent.ComputerAgent - INFO - Computer: click({'x': 507, 'y': 513})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4246/7340 [152:22<111:01, 27.9 steps/min]2025-08-11 17:58:41,221 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:58:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4247/7340 [152:23<110:59, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:58:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:43,362 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 17:58:43,363 - agent.ComputerAgent - INFO - Computer: double_click({'x': 325, 'y': 467})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4247/7340 [152:25<111:00, 27.9 steps/min]2025-08-11 17:58:43,988 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m17:58:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:44,652 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m17:58:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:58:45,342 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:58:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:58:46,003 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m17:58:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4248/7340 [152:27<110:58, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:58:47,690 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 180, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4248/7340 [152:30<111:00, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:49,002 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m17:58:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:58:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:58:50,418 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:58:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:58:51,083 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 248})\n",
+ " 58%|███████████████████████-----------------| 4249/7340 [152:32<110:58, 27.9 steps/min]2025-08-11 17:58:51,725 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:58:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:58:52,391 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m17:58:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4250/7340 [152:34<110:55, 27.9 steps/min]2025-08-11 17:58:53,023 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m17:58:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:58:53,702 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:58:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4250/7340 [152:36<110:57, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:58:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4250/7340 [152:37<110:58, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:58:57,235 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://www.united.com/'})\n",
+ "\u001b[92m17:58:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4250/7340 [152:38<110:59, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:00,805 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m17:59:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:01,455 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:02,780 - agent.ComputerAgent - INFO - Computer: type({'text': 'Civil Division forms'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4251/7340 [152:44<110:59, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:59:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4266/7340 [152:46<110:05, 27.9 steps/min]2025-08-11 17:59:05,831 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m17:59:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:59:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:06,634 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m17:59:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:07,300 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 149})\n",
+ "\u001b[92m17:59:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4266/7340 [152:49<110:07, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:59:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:08,379 - agent.ComputerAgent - INFO - Computer: double_click({'x': 376, 'y': 276})\n",
+ "2025-08-11 17:59:09,117 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 373})\n",
+ " 58%|███████████████████████-----------------| 4269/7340 [152:51<109:58, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4511ed1-a184-44ef-9245-68929a78fe33/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4269/7340 [152:54<110:00, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:59:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:14,172 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9eca68f3-1fb6-46dd-892a-b3289bcd816c/close \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4271/7340 [152:55<109:53, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:15,476 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m17:59:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4272/7340 [152:57<109:50, 27.9 steps/min]2025-08-11 17:59:16,119 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m17:59:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:16,764 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m17:59:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:17,448 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4272/7340 [152:59<109:52, 27.9 steps/min]\u001b[92m17:59:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:19,496 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:20,859 - agent.ComputerAgent - INFO - Computer: click({'x': 381, 'y': 433, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m17:59:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4272/7340 [153:03<109:55, 27.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:22,501 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m17:59:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it]\u001b[92m17:59:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it] 27.9 steps/min]2025-08-11 17:59:25,129 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m17:59:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4274/7340 [153:06<109:50, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4274/7340 [153:08<109:51, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 17:59:27,585 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4274/7340 [153:09<109:52, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:28,222 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m17:59:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:28,858 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:59:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:59:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4274/7340 [153:10<109:53, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:29,498 - agent.ComputerAgent - INFO - Computer: click({'x': 272, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 272, 'y': 60})\n",
+ "\u001b[92m17:59:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:30,152 - agent.ComputerAgent - INFO - Computer: click({'x': 698, 'y': 135})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 698, 'y': 135})\n",
+ "\u001b[92m17:59:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4274/7340 [153:11<109:53, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:59:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:30,820 - agent.ComputerAgent - INFO - Computer: click({'x': 772, 'y': 154})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 772, 'y': 154})\n",
+ "2025-08-11 17:59:31,470 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 196})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 232, 'y': 196})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:32,807 - agent.ComputerAgent - INFO - Computer: type({'text': '2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '2'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4276/7340 [153:15<109:48, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:59:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m17:59:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:35,507 - agent.ComputerAgent - INFO - Computer: double_click({'x': 437, 'y': 305})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 437, 'y': 305})\n",
+ "\u001b[92m17:59:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4279/7340 [153:17<109:39, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:36,206 - agent.ComputerAgent - INFO - Computer: click({'x': 592, 'y': 126})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 592, 'y': 126})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m17:59:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 58%|███████████████████████-----------------| 4280/7340 [153:18<109:36, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:37,508 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 278, 'y': 248})\n",
+ "\u001b[92m17:59:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:38,153 - agent.ComputerAgent - INFO - Computer: click({'x': 247, 'y': 410})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 247, 'y': 410})\n",
+ " 58%|███████████████████████-----------------| 4281/7340 [153:19<109:33, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:39,499 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m17:59:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4283/7340 [153:21<109:27, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:59:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:40,153 - agent.ComputerAgent - INFO - Computer: click({'x': 266, 'y': 298})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 266, 'y': 298})\n",
+ "2025-08-11 17:59:40,837 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:59:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4284/7340 [153:23<109:25, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:43,006 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m17:59:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4284/7340 [153:24<109:26, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:43,697 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:59:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 17:59:47,337 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m17:59:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4284/7340 [153:29<109:29, 27.9 steps/min]\u001b[92m17:59:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:48,699 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m17:59:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:59:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:49,377 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m17:59:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:50,034 - agent.ComputerAgent - INFO - Computer: click({'x': 584, 'y': 504})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 584, 'y': 504})\n",
+ "2025-08-11 17:59:50,709 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m17:59:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:51,378 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m17:59:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4284/7340 [153:33<109:32, 27.9 steps/min]2025-08-11 17:59:52,069 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m17:59:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 17:59:53,126 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m17:59:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4285/7340 [153:34<109:29, 27.9 steps/min]2025-08-11 17:59:53,767 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m17:59:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 58%|███████████████████████-----------------| 4285/7340 [153:39<109:32, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m17:59:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4285/7340 [153:40<109:33, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 17:59:59,189 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m17:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m17:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 17:59:59,880 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 83, 'y': 741})\n",
+ " 58%|███████████████████████-----------------| 4285/7340 [153:41<109:34, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:01,197 - agent.ComputerAgent - INFO - Agent: Total lines across all PHP files (recursively) displayed on the terminal: 54\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Total lines across all PHP files (recursively) displayed on the terminal: 54\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:00:01,857 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 349\n",
+ " - prompt_tokens: 10150\n",
+ " - total_tokens: 10499\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0162\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 349\n",
+ " - prompt_tokens: 10150\n",
+ " - total_tokens: 10499\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0162\n",
+ " 58%|███████████████████████-----------------| 4287/7340 [153:44<109:29, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 58%|███████████████████████-----------------| 4287/7340 [153:45<109:30, 27.9 steps/min]\u001b[92m18:00:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:00:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:00:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:05,229 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 195, 'y': 193}, {'x': 209, 'y': 223}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 195, 'y': 193}, {'x': 209, 'y': 223}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 58%|███████████████████████-----------------| 4287/7340 [153:47<109:31, 27.9 steps/min]\u001b[92m18:00:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:00:07,207 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_current_url({})\n",
+ "\u001b[92m18:00:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:08,524 - agent.ComputerAgent - INFO - Computer: type({'text': 'SEA'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'SEA'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:09,870 - agent.ComputerAgent - INFO - Computer: keykeypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keykeypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 18:00:09,871 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: keykeypress\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:00:11,171 - agent.ComputerAgent - INFO - Computer: click({'x': 405, 'y': 435, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 405, 'y': 435, 'button': 'left'})\n",
+ " 58%|███████████████████████-----------------| 4288/7340 [153:52<109:31, 27.9 steps/min]2025-08-11 18:00:11,814 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 409})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 409})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:00:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:13,761 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:00:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 59%|███████████████████████-----------------| 4299/7340 [153:55<108:53, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:00:14,442 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:00:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:00:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:15,105 - agent.ComputerAgent - INFO - Computer: click({'x': 375, 'y': 209})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 375, 'y': 209})\n",
+ "\u001b[92m18:00:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4300/7340 [153:57<108:50, 27.9 steps/min]\u001b[92m18:00:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:16,466 - agent.ComputerAgent - INFO - Computer: click({'x': 324, 'y': 330})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 324, 'y': 330})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 59%|███████████████████████-----------------| 4302/7340 [153:58<108:44, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:00:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:00:18,300 - agent.ComputerAgent - INFO - Computer: click({'x': 646, 'y': 525})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 646, 'y': 525})\n",
+ "\u001b[92m18:00:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ed59654b-b781-492f-98e6-4799284f5db3/close \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4303/7340 [154:00<108:41, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:18,974 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 659, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4304/7340 [154:01<108:38, 27.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:03<108:36, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:22,340 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:00:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:22,978 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:00:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:04<108:37, 27.9 steps/min]2025-08-11 18:00:23,669 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:00:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:00:25,169 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:00:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:06<108:39, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:08<108:40, 27.9 steps/min]2025-08-11 18:00:28,200 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:00:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:09<108:41, 27.9 steps/min]2025-08-11 18:00:28,876 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:00:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:00:30,226 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:00:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:11<108:42, 27.9 steps/min]2025-08-11 18:00:31,812 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:00:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/749cb05b-d08c-4e9f-929b-3504313826a5/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:33,154 - agent.ComputerAgent - INFO - Computer: type({'text': ' word wrap column'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' word wrap column'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4305/7340 [154:14<108:44, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4306/7340 [154:16<108:42, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4306/7340 [154:18<108:43, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]2025-08-11 18:00:37,570 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:00:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4306/7340 [154:19<108:44, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]\u001b[92m18:00:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4306/7340 [154:20<108:45, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4306/7340 [154:21<108:45, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.64s/it] 27.9 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/reset \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4307/7340 [154:24<108:44, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:44,129 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4307/7340 [154:25<108:45, 27.9 steps/min]2025-08-11 18:00:44,796 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:00:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:00:45,646 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:00:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4308/7340 [154:28<108:43, 27.9 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:08<00:00, 2.03s/it] \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4308/7340 [154:30<108:44, 27.9 steps/min]\u001b[92m18:00:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:00:49,660 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 182, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 473, 'y': 182, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:08<00:07, 3.78s/it]\u001b[92m18:00:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4309/7340 [154:33<108:42, 27.9 steps/min]2025-08-11 18:00:52,038 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:00:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:10<00:00, 2.58s/it] 27.9 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4310/7340 [154:35<108:40, 27.9 steps/min]2025-08-11 18:00:53,732 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:00:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4310/7340 [154:36<108:41, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 59%|███████████████████████-----------------| 4310/7340 [154:38<108:42, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.27s/it]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:00:58,613 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:00:58,615 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'print'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'print'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:00:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4310/7340 [154:40<108:44, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:00:59,439 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 321})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:01:00,157 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:01:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:01:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:01:01,532 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_current_url({})\n",
+ " 59%|███████████████████████-----------------| 4314/7340 [154:43<108:31, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:01:02,205 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 674})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 674})\n",
+ "\u001b[92m18:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:02,846 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:01:03,529 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 504})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 585, 'y': 504})\n",
+ "2025-08-11 18:01:04,151 - agent.ComputerAgent - INFO - Computer: click({'x': 451, 'y': 241})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 451, 'y': 241})\n",
+ "2025-08-11 18:01:04,799 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:01:04,800 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 753})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 753})\n",
+ "2025-08-11 18:01:05,440 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 238})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 633, 'y': 238})\n",
+ "2025-08-11 18:01:06,083 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 623, 'x': 362, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 623, 'x': 362, 'y': 446})\n",
+ " 59%|███████████████████████-----------------| 4316/7340 [154:47<108:27, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 59%|███████████████████████-----------------| 4321/7340 [154:48<108:09, 27.9 steps/min]\u001b[92m18:01:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:07,875 - agent.ComputerAgent - INFO - Computer: click({'x': 242, 'y': 410})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 242, 'y': 410})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0eeec537-c268-4581-b4ed-23eea7ab177f/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:01:09,865 - agent.ComputerAgent - INFO - Computer: click({'x': 340, 'y': 462, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 340, 'y': 462, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4321/7340 [154:51<108:11, 27.9 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:01:10,459 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:01:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4323/7340 [154:52<108:05, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:01:12,265 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:01:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4323/7340 [154:54<108:06, 27.9 steps/min]2025-08-11 18:01:12,915 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:01:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:01:13,599 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:01:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4324/7340 [154:55<108:03, 27.9 steps/min]2025-08-11 18:01:14,249 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:01:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:01:14,888 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:01:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4324/7340 [154:56<108:04, 27.9 steps/min]2025-08-11 18:01:15,535 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:01:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4324/7340 [154:57<108:05, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:01:16,716 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:01:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4324/7340 [154:58<108:05, 27.9 steps/min]2025-08-11 18:01:17,393 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:01:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:01:18,030 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:01:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<108:06, 27.9 steps/min]2025-08-11 18:01:18,715 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:01:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4324/7340 [155:00<108:07, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it] 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b01cd4a6-3203-476b-8ece-c651b889f821/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:01:21,454 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m18:01:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it] 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:01:22,599 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:01:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:01:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4325/7340 [155:05<108:06, 27.9 steps/min]\n",
+ "2025-08-11 18:01:26,447 - agent.ComputerAgent - INFO - Computer: click({'x': 654, 'y': 495})\n",
+ "[... repetitive LiteLLM/httpx/agent.ComputerAgent log lines truncated ...]\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:01:42,703 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:01:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4337/7340 [155:24<107:36, 27.9 steps/min]2025-08-11 18:01:43,889 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:01:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4339/7340 [155:25<107:29, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:01:44,529 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m18:01:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4339/7340 [155:26<107:30, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:46,363 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:01:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4339/7340 [155:28<107:31, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:01:46,998 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:01:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:01:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:48,448 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 656, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4340/7340 [155:30<107:29, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:01:49,742 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:01:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:01:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:50,384 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:01:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:01:51,062 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:01:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:01:51,727 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:01:51,727 - agent.ComputerAgent - INFO - Computer: double_click({'x': 960, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 960, 'y': 713})\n",
+ " 59%|███████████████████████-----------------| 4341/7340 [155:33<107:28, 27.9 steps/min]2025-08-11 18:01:52,401 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:01:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 59%|███████████████████████-----------------| 4342/7340 [155:35<107:25, 27.9 steps/min]\u001b[92m18:01:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:55,084 - agent.ComputerAgent - INFO - Computer: type({'text': 'about:profiles'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'about:profiles'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:01:55,729 - agent.ComputerAgent - INFO - Computer: click({'x': 264, 'y': 299})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 264, 'y': 299})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4342/7340 [155:37<107:27, 27.9 steps/min]\u001b[92m18:01:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:01:56,394 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m18:01:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:01:57,052 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 524})\n",
+ " 59%|███████████████████████-----------------| 4344/7340 [155:38<107:20, 27.9 steps/min]2025-08-11 18:01:57,759 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:01:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4345/7340 [155:39<107:17, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:01:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 59%|███████████████████████-----------------| 4345/7340 [155:41<107:18, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:02:00,141 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:02:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:02:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:02:01,172 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 525})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 656, 'y': 525})\n",
+ " 59%|███████████████████████-----------------| 4346/7340 [155:42<107:16, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:02:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4347/7340 [155:44<107:13, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:02:02,991 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:02:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:02:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:02:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:02:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:02:05,057 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 325})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 420, 'y': 325})\n",
+ "2025-08-11 18:02:05,707 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m18:02:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4347/7340 [155:47<107:15, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:02:06,767 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:02:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 59%|███████████████████████-----------------| 4348/7340 [155:48<107:13, 27.9 steps/min]\u001b[92m18:02:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:02:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:02:07,462 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:02:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:02:08,136 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 247})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 247})\n",
+ "2025-08-11 18:02:08,779 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 389})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 389})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ " 59%|███████████████████████-----------------| 4349/7340 [155:50<107:10, 27.9 steps/min]2025-08-11 18:02:09,461 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "2025-08-11 18:03:04,185 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|███████████████████████-----------------| 4375/7340 [156:46<106:14, 27.9 steps/min]\u001b[92m18:03:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:03:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:05,474 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:03:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:03:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:06,153 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 304})\n",
+ "\u001b[92m18:03:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4377/7340 [156:47<106:08, 27.9 steps/min]2025-08-11 18:03:06,785 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 638, 'y': 478}, {'x': 638, 'y': 478}]})\n",
+ " 60%|███████████████████████-----------------| 4379/7340 [156:48<106:02, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6200987a-bb64-4bc4-998c-b40e29f81c9d/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 60%|███████████████████████-----------------| 4380/7340 [156:51<105:59, 27.9 steps/min]\u001b[92m18:03:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:03:09,941 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 410})\n",
+ " 60%|███████████████████████-----------------| 4380/7340 [156:52<106:00, 27.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4381/7340 [156:53<105:57, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it] 27.9 steps/min]2025-08-11 18:03:13,001 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:03:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]\u001b[92m18:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:16,559 - agent.ComputerAgent - INFO - Computer: type({'text': 'about:profiles\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4381/7340 [156:58<106:01, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:17,943 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m18:03:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:03:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|███████████████████████-----------------| 4382/7340 [157:00<105:59, 27.9 steps/min]2025-08-11 18:03:19,288 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 285})\n",
+ "2025-08-11 18:03:19,926 - agent.ComputerAgent - INFO - Computer: click({'x': 412, 'y': 354})\n",
+ "2025-08-11 18:03:20,608 - agent.ComputerAgent - INFO - Computer: double_click({'x': 334, 'y': 466})\n",
+ "\u001b[92m18:03:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:21,246 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:03:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:03:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:21,890 - agent.ComputerAgent - INFO - Computer: click({'x': 116, 'y': 53})\n",
+ " 60%|███████████████████████-----------------| 4383/7340 [157:03<105:57, 27.9 steps/min]\u001b[92m18:03:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:22,553 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 577})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:23,600 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:03:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:03:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:03:24,942 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 60%|███████████████████████-----------------| 4387/7340 [157:06<105:45, 27.9 steps/min]2025-08-11 18:03:25,636 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 253, 'y': 182}, {'x': 210, 'y': 537}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:03:26,922 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 60%|███████████████████████-----------------| 4389/7340 [157:08<105:39, 27.9 steps/min]2025-08-11 18:03:27,604 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:03:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:03:28,284 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:03:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:10<105:36, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:11<105:37, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3366ea7c-a6bb-4862-a1d3-a12e59d541a5/close \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:12<105:38, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:03:31,192 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:03:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:13<105:38, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:03:32,373 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:03:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:14<105:39, 27.9 steps/min]2025-08-11 18:03:33,053 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:03:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:03:35,215 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:03:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:19<105:43, 27.9 steps/min]\u001b[92m18:03:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:38,477 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:03:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:03:39,138 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:03:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:03:40,455 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:03:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:22<105:44, 27.9 steps/min]2025-08-11 18:03:41,123 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:03:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]2025-08-11 18:03:42,057 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:03:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:23<105:46, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:03:42,756 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:03:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it] 27.9 steps/min]\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:28<105:48, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 60%|███████████████████████-----------------| 4390/7340 [157:29<105:49, 27.9 steps/min]\u001b[92m18:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:47,691 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 245})\n",
+ "\u001b[92m18:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:48,375 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 635})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 562, 'y': 635})\n",
+ " 60%|███████████████████████-----------------| 4391/7340 [157:30<105:46, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 60%|███████████████████████-----------------| 4392/7340 [157:31<105:43, 27.9 steps/min]\u001b[92m18:03:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:50,155 - agent.ComputerAgent - INFO - Computer: click({'x': 731, 'y': 603})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 731, 'y': 603})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|███████████████████████-----------------| 4392/7340 [157:32<105:44, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:03:52,148 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 193, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 193, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|███████████████████████-----------------| 4393/7340 [157:34<105:42, 27.9 steps/min]\u001b[92m18:03:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:53,456 - agent.ComputerAgent - INFO - Computer: click({'x': 583, 'y': 267})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 583, 'y': 267})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:03:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4394/7340 [157:37<105:41, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:03:56,762 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 737})\n",
+ "\u001b[92m18:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:03:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:58,085 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 304})\n",
+ "2025-08-11 18:03:58,717 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:03:58,718 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 15, 'y': 427})\n",
+ "\u001b[92m18:03:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:03:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:03:59,757 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:03:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:04:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|███████████████████████-----------------| 4395/7340 [157:42<105:40, 27.9 steps/min]\n",
+ "2025-08-11 18:04:01,144 - agent.ComputerAgent - INFO - Computer: click({'x': 242, 'y': 406})\n",
+ "2025-08-11 18:04:01,793 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 113})\n",
+ "\u001b[92m18:04:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:02,442 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:04:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:03,111 - agent.ComputerAgent - INFO - Computer: click({'x': 52, 'y': 741})\n",
+ " 60%|███████████████████████-----------------| 4398/7340 [157:44<105:31, 27.9 steps/min]\u001b[92m18:04:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:04:03,785 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:04:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:05,241 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 577})\n",
+ " 60%|███████████████████████-----------------| 4402/7340 [157:51<105:21, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:04:11,101 - agent.ComputerAgent - INFO - Computer: type({'text': '\\nffmpeg -version\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|███████████████████████-----------------| 4402/7340 [157:52<105:22, 27.9 steps/min]2025-08-11 18:04:11,702 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:04:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:13,629 - agent.ComputerAgent - INFO - Computer: click({'x': 357, 'y': 300})\n",
+ " 60%|███████████████████████-----------------| 4403/7340 [157:55<105:20, 27.9 steps/min]2025-08-11 18:04:14,262 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:04:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:14,902 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:04:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:04:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4404/7340 [157:56<105:17, 27.9 steps/min]2025-08-11 18:04:15,946 - agent.ComputerAgent - INFO - Computer: click({'x': 522, 'y': 488})\n",
+ "2025-08-11 18:04:16,631 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:04:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:17,270 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:04:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4404/7340 [157:59<105:19, 27.9 steps/min]2025-08-11 18:04:17,912 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:04:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4405/7340 [158:00<105:16, 27.9 steps/min]2025-08-11 18:04:19,435 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:04:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4405/7340 [158:01<105:17, 27.9 steps/min]\u001b[92m18:04:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:04:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4406/7340 [158:03<105:14, 27.9 steps/min]\n",
+ "\u001b[92m18:04:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:22,477 - agent.ComputerAgent - INFO - Computer: click({'x': 185, 'y': 105})\n",
+ "\u001b[92m18:04:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4406/7340 [158:04<105:16, 27.9 steps/min]\u001b[92m18:04:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:04:23,848 - agent.ComputerAgent - INFO - Computer: click({'x': 928, 'y': 305})\n",
+ "\u001b[92m18:04:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:24,499 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:04:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:25,161 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:04:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:04:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4409/7340 [158:07<105:06, 27.9 steps/min]\u001b[92m18:04:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:04:25,808 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 386})\n",
+ "2025-08-11 18:04:26,793 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:04:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:27,868 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 91, 'y': 248}, {'x': 469, 'y': 318}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c898e6a1-68ea-4822-8d12-52633e08a154/close \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4412/7340 [158:10<104:58, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4756ec69-c09e-4f99-a5ad-21ec6c831003/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4412/7340 [158:12<104:59, 27.9 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4412/7340 [158:14<105:00, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.56s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:04:33,807 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:04:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4412/7340 [158:15<105:01, 27.9 steps/min]2025-08-11 18:04:34,619 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.55s/it]\n",
+ "\u001b[92m18:04:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]\n",
+ "2025-08-11 18:04:36,045 - agent.ComputerAgent - INFO - Computer: type({'text': 'about:profiles'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:04:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4412/7340 [158:18<105:03, 27.9 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:04:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/941d9ec3-7c28-40f6-b948-70db95115571/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:04:38,127 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:04:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:38,812 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 404})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]2025-08-11 18:04:39,487 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:04:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4414/7340 [158:21<104:58, 27.9 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4414/7340 [158:22<104:59, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:04:41,877 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:04:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ " 60%|████████████████████████----------------| 4414/7340 [158:24<105:00, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4414/7340 [158:26<105:01, 27.9 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:45,058 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:04:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:04:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:46,134 - agent.ComputerAgent - INFO - Computer: click({'x': 116, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:04:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:04:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4414/7340 [158:28<105:03, 27.9 steps/min]2025-08-11 18:04:47,458 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 698})\n",
+ "2025-08-11 18:04:48,137 - agent.ComputerAgent - INFO - Computer: click({'x': 656, 'y': 304})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:04:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:04:49,487 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 418})\n",
+ "\u001b[92m18:04:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:50,830 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:04:50,832 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ " 60%|████████████████████████----------------| 4415/7340 [158:32<105:02, 27.8 steps/min]2025-08-11 18:04:51,524 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 382})\n",
+ "\u001b[92m18:04:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:04:52,186 - agent.ComputerAgent - INFO - Computer: click({'x': 264, 'y': 299})\n",
+ "2025-08-11 18:04:52,837 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4418/7340 [158:34<104:52, 27.9 steps/min]2025-08-11 18:04:53,498 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:04:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:04:54,177 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:04:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4420/7340 [158:36<104:46, 27.9 steps/min]\u001b[92m18:04:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:04:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:04:55,993 - agent.ComputerAgent - INFO - Computer: click({'x': 291, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4420/7340 [158:39<104:48, 27.9 steps/min]\u001b[92m18:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:04:59,143 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "\u001b[92m18:04:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:04:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4421/7340 [158:40<104:46, 27.9 steps/min]2025-08-11 18:04:59,806 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 266})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 266})\n",
+ "2025-08-11 18:05:00,498 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 93, 'y': 245}, {'x': 453, 'y': 244}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 93, 'y': 245}, {'x': 453, 'y': 244}]})\n",
+ "2025-08-11 18:05:01,187 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:05:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:05:01,858 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:05:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4422/7340 [158:44<104:44, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:03,569 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:05:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:05:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4424/7340 [158:45<104:38, 27.9 steps/min]2025-08-11 18:05:04,198 - agent.ComputerAgent - INFO - Computer: double_click({'x': 184, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 184, 'y': 105})\n",
+ "2025-08-11 18:05:04,868 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:05:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:05:05,559 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:05:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:05:06,243 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:05:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4424/7340 [158:47<104:40, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:05:07,312 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:05:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4425/7340 [158:49<104:37, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:05:07,988 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:05:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:10,128 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:05:10,129 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:05:12,133 - agent.ComputerAgent - INFO - Computer: type({'text': 'ffmpeg -ss 2 -i /home/user/fullvideo.mp4 -t 2 -c:v libx264 -preset veryfast -crf 23 -c:a aac -b:a 128k -movflags +faststart /home/user/fullvideo_trim_2s_to_4s.mp4'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ffmpeg -ss 2 -i /home/user/fullvideo.mp4 -t 2 -c:v libx264 -preset veryfast -crf 23 -c:a aac -b:a 128k -movflags +faststart /home/user/fullvideo_trim_2s_to_4s.mp4'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4425/7340 [158:54<104:40, 27.8 steps/min]\u001b[92m18:05:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:13,449 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:05:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:14,092 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:05:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:05:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4427/7340 [158:55<104:34, 27.9 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:05:14,746 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:05:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:05:15,408 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 432})\n",
+ "2025-08-11 18:05:16,073 - agent.ComputerAgent - INFO - Computer: click({'x': 553, 'y': 113})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 553, 'y': 113})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4427/7340 [158:58<104:36, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:18,109 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "\u001b[92m18:05:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4429/7340 [158:59<104:30, 27.9 steps/min]2025-08-11 18:05:18,768 - agent.ComputerAgent - INFO - Computer: click({'x': 148, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 148, 'y': 91})\n",
+ "2025-08-11 18:05:19,397 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:05:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4430/7340 [159:01<104:27, 27.9 steps/min]\u001b[92m18:05:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:21,427 - agent.ComputerAgent - INFO - Computer: type({'text': 'SEA'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'SEA'})\n",
+ " 60%|████████████████████████----------------| 4431/7340 [159:03<104:25, 27.9 steps/min]\u001b[92m18:05:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:05:22,111 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 428})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/reset \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4432/7340 [159:04<104:22, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4433/7340 [159:06<104:20, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:25,366 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:05:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4433/7340 [159:07<104:20, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4433/7340 [159:08<104:21, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ab746d73-0661-41f7-b989-ce2eb2890384/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:27,767 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:05:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4433/7340 [159:10<104:23, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:30,470 - agent.ComputerAgent - INFO - Computer: type({'text': 'sender_name\\tsender_address\\tsubject\\tCC\\tnumber_of_attachments'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sender_name\\tsender_address\\tsubject\\tCC\\tnumber_of_attachments'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.70s/it]\u001b[92m18:05:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:05:33,635 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:05:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 18:05:34,530 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:05:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:35,988 - agent.ComputerAgent - INFO - Computer: type({'text': '512'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:agent.ComputerAgent:Computer: type({'text': '512'})\n",
+ " 60%|████████████████████████----------------| 4433/7340 [159:17<104:27, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "\u001b[92m18:05:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:05:37,703 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:05:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4435/7340 [159:19<104:21, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4435/7340 [159:20<104:22, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:39,087 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:39,763 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 60%|████████████████████████----------------| 4435/7340 [159:21<104:22, 27.8 steps/min]\u001b[92m18:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:40,461 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 577})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 577})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:41,109 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:05:41,111 - agent.ComputerAgent - INFO - Computer: click({'x': 92, 'y': 315})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 92, 'y': 315})\n",
+ "\u001b[92m18:05:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4435/7340 [159:22<104:23, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:41,861 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 245})\n",
+ "2025-08-11 18:05:42,544 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 332})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 918, 'y': 332})\n",
+ "\u001b[92m18:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 60%|████████████████████████----------------| 4437/7340 [159:24<104:17, 27.8 steps/min]2025-08-11 18:05:43,622 - agent.ComputerAgent - INFO - Computer: click({'x': 559, 'y': 504})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 559, 'y': 504})\n",
+ "2025-08-11 18:05:44,270 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 483})\n",
+ "2025-08-11 18:05:44,938 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 398})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 512, 'y': 398})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ " 60%|████████████████████████----------------| 4439/7340 [159:26<104:12, 27.8 steps/min]2025-08-11 18:05:45,631 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:05:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 61%|████████████████████████----------------| 4442/7340 [159:28<104:02, 27.9 steps/min]\u001b[92m18:05:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:05:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:05:47,585 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 122, 'y': 53})\n",
+ "2025-08-11 18:05:48,218 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4442/7340 [159:29<104:03, 27.8 steps/min]2025-08-11 18:05:48,850 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:05:48,851 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 693})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 693})\n",
+ "2025-08-11 18:05:49,527 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:05:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4444/7340 [159:32<103:57, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:33<103:51, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a00ed5ae-3ff5-4a40-babb-32008e5ccbf2/close \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:34<103:52, 27.9 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:53,846 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:05:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:35<103:52, 27.9 steps/min]2025-08-11 18:05:54,532 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:05:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:05:55,567 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:05:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:37<103:54, 27.9 steps/min]2025-08-11 18:05:56,202 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:05:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:05:56,832 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:05:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:39<103:55, 27.8 steps/min]2025-08-11 18:05:58,187 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:05:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:05:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:40<103:56, 27.8 steps/min]2025-08-11 18:05:59,536 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:05:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:06:00,207 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:06:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:42<103:57, 27.8 steps/min]\u001b[92m18:06:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]\u001b[92m18:06:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3fe89487-7164-4dc0-9512-0a0b26cf8e83/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:44<103:58, 27.8 steps/min]\u001b[92m18:06:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:45<103:59, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 18:06:05,402 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4446/7340 [159:47<104:01, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4447/7340 [159:49<103:58, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4447/7340 [159:53<104:01, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ "\u001b[92m18:06:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4447/7340 [159:54<104:01, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4447/7340 [159:55<104:02, 27.8 steps/min]2025-08-11 18:06:14,816 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:06:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4447/7340 [159:56<104:03, 27.8 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:06:16,497 - agent.ComputerAgent - INFO - Computer: double_click({'x': 334, 'y': 466})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 334, 'y': 466})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4447/7340 [159:58<104:04, 27.8 steps/min]\u001b[92m18:06:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4448/7340 [160:00<104:01, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4448/7340 [160:01<104:02, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:06:20,499 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:06:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4448/7340 [160:02<104:03, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4448/7340 [160:03<104:03, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:06:22,793 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4448/7340 [160:04<104:04, 27.8 steps/min]2025-08-11 18:06:23,438 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:06:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:06:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:06<104:02, 27.8 steps/min]\u001b[92m18:06:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:07<104:03, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:08<104:03, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:09<104:04, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:11<104:05, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:06:31,199 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:06:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f096381e-eb5b-49dc-8943-c821405cce10/reset \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:12<104:06, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:13<104:07, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:06:33,388 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:06:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:15<104:07, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:16<104:08, 27.8 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4449/7340 [160:17<104:09, 27.8 steps/min]\u001b[92m18:06:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:06:36,740 - agent.ComputerAgent - INFO - Computer: click({'x': 254, 'y': 324})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 254, 'y': 324})\n",
+ " 61%|████████████████████████----------------| 4450/7340 [160:21<104:08, 27.8 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:06:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4450/7340 [160:23<104:09, 27.7 steps/min]\u001b[92m18:06:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:06:42,452 - agent.ComputerAgent - INFO - Computer: click({'x': 546, 'y': 452})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 546, 'y': 452})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4450/7340 [160:24<104:10, 27.7 steps/min]2025-08-11 18:06:43,762 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:06:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:27<104:08, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:31<104:11, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:06:50,811 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:06:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:34<104:13, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:06:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:35<104:14, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:42<104:18, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:07:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:44<104:19, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4451/7340 [160:45<104:20, 27.7 steps/min]\u001b[92m18:07:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:07:04,546 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 203})\n",
+ " 61%|████████████████████████----------------| 4452/7340 [160:50<104:20, 27.7 steps/min]\u001b[92m18:07:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:07:09,318 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 980, 'y': 60})\n",
+ " 61%|████████████████████████----------------| 4452/7340 [160:51<104:20, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4453/7340 [160:52<104:17, 27.7 steps/min]2025-08-11 18:07:11,542 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:07:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4453/7340 [160:56<104:20, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4453/7340 [160:57<104:21, 27.7 steps/min]2025-08-11 18:07:16,919 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:07:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4453/7340 [160:58<104:21, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4453/7340 [161:10<104:29, 27.6 steps/min]\u001b[92m18:07:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:07:29,796 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 247})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 247})\n",
+ " 61%|████████████████████████----------------| 4453/7340 [161:11<104:30, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 61%|████████████████████████----------------| 4454/7340 [161:12<104:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:07:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4454/7340 [161:14<104:28, 27.6 steps/min]\u001b[92m18:07:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:07:33,693 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:07:33,693 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 427})\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:16<104:26, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:07:35,883 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:07:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:19<104:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:07:39,640 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:07:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:21<104:29, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:25<104:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:07:45,082 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:26<104:33, 27.6 steps/min]2025-08-11 18:07:46,223 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:07:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:27<104:33, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:31<104:35, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:07:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c83605a3-e62d-48d7-8568-f181d5627773/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:32<104:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:07:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:34<104:37, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:07:53,330 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:07:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:40<104:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:00,290 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:08:00,291 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:42<104:42, 27.6 steps/min]\u001b[92m18:08:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:00,970 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 53})\n",
+ "2025-08-11 18:08:01,591 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:08:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4455/7340 [161:43<104:43, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4456/7340 [161:44<104:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4456/7340 [161:45<104:41, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:04,467 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 484})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 484})\n",
+ " 61%|████████████████████████----------------| 4456/7340 [161:46<104:42, 27.5 steps/min]\u001b[92m18:08:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:06,159 - agent.ComputerAgent - INFO - Computer: click({'x': 669, 'y': 539})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 669, 'y': 539})\n",
+ " 61%|████████████████████████----------------| 4457/7340 [161:47<104:39, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:07,933 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ "\u001b[92m18:08:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4458/7340 [161:49<104:37, 27.5 steps/min]2025-08-11 18:08:08,564 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 240})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 240})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:09,222 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:08:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4458/7340 [161:50<104:37, 27.5 steps/min]\u001b[92m18:08:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:09,902 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:08:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:08:10,594 - agent.ComputerAgent - INFO - Computer: click({'x': 464, 'y': 418})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 464, 'y': 418})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4459/7340 [161:52<104:35, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:11,242 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:08:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:12,670 - agent.ComputerAgent - INFO - Computer: click({'x': 560, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 560, 'y': 249})\n",
+ "\u001b[92m18:08:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4460/7340 [161:54<104:32, 27.5 steps/min]2025-08-11 18:08:13,380 - agent.ComputerAgent - INFO - Computer: click({'x': 661, 'y': 303})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 661, 'y': 303})\n",
+ "2025-08-11 18:08:14,057 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4461/7340 [161:55<104:30, 27.5 steps/min]\u001b[92m18:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:15,065 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 59})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 59})\n",
+ " 61%|████████████████████████----------------| 4462/7340 [161:56<104:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:16,366 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:08:16,368 - agent.ComputerAgent - INFO - Computer: click({'x': 833, 'y': 382})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 833, 'y': 382})\n",
+ " 61%|████████████████████████----------------| 4464/7340 [161:59<104:21, 27.6 steps/min]\u001b[92m18:08:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:08:18,554 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:08:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4464/7340 [162:00<104:22, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:20,230 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:08:20,232 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 761})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 761})\n",
+ " 61%|████████████████████████----------------| 4464/7340 [162:01<104:23, 27.6 steps/min]2025-08-11 18:08:20,890 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:08:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:21,529 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4465/7340 [162:03<104:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:08:22,202 - agent.ComputerAgent - INFO - Computer: click({'x': 253, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 253, 'y': 321})\n",
+ "2025-08-11 18:08:22,842 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:08:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4465/7340 [162:04<104:21, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:23,528 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:08:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:08:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:24,199 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 525})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 525})\n",
+ " 61%|████████████████████████----------------| 4466/7340 [162:05<104:18, 27.6 steps/min]\u001b[92m18:08:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:26,108 - agent.ComputerAgent - INFO - Computer: click({'x': 589, 'y': 115})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 589, 'y': 115})\n",
+ " 61%|████████████████████████----------------| 4467/7340 [162:07<104:16, 27.6 steps/min]\u001b[92m18:08:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:27,291 - agent.ComputerAgent - INFO - Computer: double_click({'x': 91, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 91, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4468/7340 [162:09<104:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:28,598 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 247})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 247})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4469/7340 [162:10<104:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:08:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4470/7340 [162:12<104:09, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:31,842 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:08:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:08:32,522 - agent.ComputerAgent - INFO - Computer: click({'x': 853, 'y': 295})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 853, 'y': 295})\n",
+ "\u001b[92m18:08:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:33,144 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:08:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4470/7340 [162:14<104:10, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:33,806 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 64})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:08:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4471/7340 [162:16<104:07, 27.6 steps/min]2025-08-11 18:08:35,146 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 197, 'y': 104}, {'x': 554, 'y': 281}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 197, 'y': 104}, {'x': 554, 'y': 281}]})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:35,823 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:08:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:08:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4472/7340 [162:17<104:04, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:36,466 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 193})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 193})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:37,125 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:08:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:08:37,801 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:08:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:08:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4473/7340 [162:19<104:02, 27.6 steps/min]2025-08-11 18:08:38,490 - agent.ComputerAgent - INFO - Computer: click({'x': 220, 'y': 195})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 220, 'y': 195})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:39,132 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:08:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4474/7340 [162:20<103:59, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:08:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:39,846 - agent.ComputerAgent - INFO - Computer: click({'x': 672, 'y': 539})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 672, 'y': 539})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b38f9458-06e6-46c5-afef-cd85f3b4f340/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 61%|████████████████████████----------------| 4475/7340 [162:22<103:57, 27.6 steps/min]\u001b[92m18:08:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:41,170 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 627})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 627})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:08:42,459 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Videos\\nls -lah\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/Videos\\nls -lah\\n'})\n",
+ "\u001b[92m18:08:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4476/7340 [162:24<103:55, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:43,807 - agent.ComputerAgent - INFO - Computer: click({'x': 600, 'y': 311})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 600, 'y': 311})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:08:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:08:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]\n",
+ "2025-08-11 18:08:45,863 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]\n",
+ "2025-08-11 18:08:47,421 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4479/7340 [162:29<103:47, 27.6 steps/min]\n",
+ "2025-08-11 18:08:48,103 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]\n",
+ "2025-08-11 18:08:48,983 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:08:50,175 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4479/7340 [162:32<103:49, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4479/7340 [162:35<103:51, 27.5 steps/min]\n",
+ "2025-08-11 18:08:54,573 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 52})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:08:55,254 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "2025-08-11 18:08:55,923 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:08:56,585 - agent.ComputerAgent - INFO - Computer: double_click({'x': 204, 'y': 111})\n",
+ "2025-08-11 18:08:57,214 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 60})\n",
+ "2025-08-11 18:08:57,849 - agent.ComputerAgent - INFO - Computer: double_click({'x': 253, 'y': 324})\n",
+ "2025-08-11 18:08:58,548 - agent.ComputerAgent - INFO - Computer: double_click({'x': 461, 'y': 418})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4480/7340 [162:40<103:51, 27.5 steps/min]\n",
+ "2025-08-11 18:08:59,827 - agent.ComputerAgent - INFO - Computer: click({'x': 1004, 'y': 10})\n",
+ "2025-08-11 18:09:00,471 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 248})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4484/7340 [162:42<103:38, 27.6 steps/min]\n",
+ "2025-08-11 18:09:01,832 - agent.ComputerAgent - INFO - Computer: click({'x': 844, 'y': 404})\n",
+ "2025-08-11 18:09:02,479 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 148})\n",
+ "2025-08-11 18:09:03,124 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4486/7340 [162:45<103:32, 27.6 steps/min]\n",
+ "2025-08-11 18:09:04,422 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4488/7340 [162:46<103:26, 27.6 steps/min]\n",
+ "2025-08-11 18:09:05,578 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 203, 'y': 136}, {'x': 298, 'y': 558}]})\n",
+ " 61%|████████████████████████----------------| 4489/7340 [162:48<103:24, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:07,744 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:09,054 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "2025-08-11 18:09:09,747 - agent.ComputerAgent - INFO - Computer: click({'x': 826, 'y': 84})\n",
+ "2025-08-11 18:09:10,387 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "2025-08-11 18:09:11,013 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:11,732 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "2025-08-11 18:09:12,413 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4490/7340 [162:54<103:24, 27.6 steps/min]\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:13,754 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:15,488 - agent.ComputerAgent - INFO - Computer: click({'x': 451, 'y': 240})\n",
+ "2025-08-11 18:09:16,804 - agent.ComputerAgent - INFO - Computer: type({'text': 'find ~ -maxdepth 3 -type f -iname \"*.mp4\" -o -iname \"*.mkv\" -o -iname \"*.mov\"\\n'})\n",
+ "2025-08-11 18:09:17,484 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "2025-08-11 18:09:18,132 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "2025-08-11 18:09:18,773 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:09:18,774 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 278})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4492/7340 [163:05<103:24, 27.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:24,322 - agent.ComputerAgent - INFO - Computer: click({'x': 676, 'y': 539})\n",
+ "2025-08-11 18:09:24,990 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 95})\n",
+ " 61%|████████████████████████----------------| 4493/7340 [163:06<103:21, 27.5 steps/min]\n",
+ "2025-08-11 18:09:25,628 - agent.ComputerAgent - INFO - Computer: double_click({'x': 275, 'y': 105})\n",
+ "2025-08-11 18:09:26,250 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4496/7340 [163:09<103:12, 27.6 steps/min]\n",
+ "2025-08-11 18:09:27,425 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "2025-08-11 18:09:28,055 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:30,798 - agent.ComputerAgent - INFO - Computer: click({'x': 849, 'y': 271})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:32,155 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:33,500 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ " 61%|████████████████████████----------------| 4497/7340 [163:15<103:12, 27.5 steps/min]\n",
+ "2025-08-11 18:09:34,194 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "2025-08-11 18:09:34,864 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "2025-08-11 18:09:35,520 - agent.ComputerAgent - INFO - Computer: move({'x': 102, 'y': 184})\n",
+ "2025-08-11 18:09:36,194 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "2025-08-11 18:09:36,875 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:39,262 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ " 61%|████████████████████████----------------| 4497/7340 [163:21<103:16, 27.5 steps/min]\n",
+ "INFO:LiteLLM:LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 61%|████████████████████████----------------| 4498/7340 [163:23<103:14, 27.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:09:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:42,984 - agent.ComputerAgent - INFO - Computer: click({'x': 889, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 889, 'y': 134})\n",
+ "2025-08-11 18:09:43,681 - agent.ComputerAgent - INFO - Computer: double_click({'x': 252, 'y': 290})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 252, 'y': 290})\n",
+ "\u001b[92m18:09:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:09:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:09:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4498/7340 [163:25<103:15, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:44,363 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 248})\n",
+ "2025-08-11 18:09:45,023 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 348})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 348})\n",
+ "2025-08-11 18:09:45,688 - agent.ComputerAgent - INFO - Computer: click({'x': 938, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 938, 'y': 243})\n",
+ "\u001b[92m18:09:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:09:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4500/7340 [163:28<103:10, 27.5 steps/min]2025-08-11 18:09:47,014 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 248, 'y': 304})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:47,711 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:09:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:09:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4503/7340 [163:29<103:00, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:48,339 - agent.ComputerAgent - INFO - Computer: click({'x': 264, 'y': 284})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 264, 'y': 284})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:49,660 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 61%|████████████████████████----------------| 4504/7340 [163:31<102:57, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:09:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 61%|████████████████████████----------------| 4506/7340 [163:32<102:51, 27.6 steps/min]\u001b[92m18:09:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:51,873 - agent.ComputerAgent - INFO - Computer: click({'x': 161, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 161, 'y': 185})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 61%|████████████████████████----------------| 4506/7340 [163:33<102:52, 27.5 steps/min]2025-08-11 18:09:52,565 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:09:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:53,928 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~\\nls -lh video.*\\nffmpeg -version\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~\\nls -lh video.*\\nffmpeg -version\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:55,216 - agent.ComputerAgent - INFO - Computer: type({'text': 'Sales & COGS'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Sales & COGS'})\n",
+ " 61%|████████████████████████----------------| 4507/7340 [163:36<102:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:09:55,845 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:09:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:09:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:57,115 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:09:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4509/7340 [163:38<102:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:09:57,808 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:09:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:09:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:09:58,505 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:09:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:09:59,144 - agent.ComputerAgent - INFO - Computer: click({'x': 670, 'y': 229})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 670, 'y': 229})\n",
+ "2025-08-11 18:09:59,795 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:09:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:10:00,463 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:10:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 61%|████████████████████████----------------| 4509/7340 [163:42<102:46, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:01,798 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:10:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:10:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:04,449 - agent.ComputerAgent - INFO - Computer: type({'text': 'Page 1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Page 1'})\n",
+ " 61%|████████████████████████----------------| 4510/7340 [163:46<102:45, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:05,162 - agent.ComputerAgent - INFO - Computer: click({'x': 495, 'y': 396})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 495, 'y': 396})\n",
+ "\u001b[92m18:10:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:10:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:06,446 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:10:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:10:07,129 - agent.ComputerAgent - INFO - Computer: click({'x': 799, 'y': 399})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 799, 'y': 399})\n",
+ "\u001b[92m18:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 61%|████████████████████████----------------| 4511/7340 [163:48<102:43, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:07,820 - agent.ComputerAgent - INFO - Computer: click({'x': 849, 'y': 317})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 849, 'y': 317})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:08,459 - agent.ComputerAgent - INFO - Computer: click({'x': 564, 'y': 445})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 564, 'y': 445})\n",
+ " 61%|████████████████████████----------------| 4513/7340 [163:50<102:37, 27.5 steps/min]2025-08-11 18:10:09,081 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:10:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:10:09,726 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:10:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4515/7340 [163:51<102:31, 27.6 steps/min]2025-08-11 18:10:10,405 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:10:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4515/7340 [163:52<102:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:10:11,596 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:10:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4515/7340 [163:53<102:32, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4515/7340 [163:55<102:33, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:14,075 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:10:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:10:15,139 - agent.ComputerAgent - INFO - Computer: move({'x': 102, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 102, 'y': 182})\n",
+ "\u001b[92m18:10:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4515/7340 [163:56<102:34, 27.5 steps/min]2025-08-11 18:10:15,820 - agent.ComputerAgent - INFO - Computer: click({'x': 986, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 986, 'y': 760})\n",
+ "2025-08-11 18:10:16,466 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:10:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:10:18,470 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 62%|████████████████████████----------------| 4516/7340 [164:00<102:33, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:19,132 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4518/7340 [164:02<102:27, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:21,124 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 248})\n",
+ "\u001b[92m18:10:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:21,763 - agent.ComputerAgent - INFO - Computer: click({'x': 881, 'y': 283})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 881, 'y': 283})\n",
+ "\u001b[92m18:10:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4518/7340 [164:03<102:28, 27.5 steps/min]2025-08-11 18:10:22,434 - agent.ComputerAgent - INFO - Computer: click({'x': 252, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 252, 'y': 60})\n",
+ "2025-08-11 18:10:23,057 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:10:23,696 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/reset \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4520/7340 [164:05<102:22, 27.5 steps/min]2025-08-11 18:10:24,390 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:10:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:10:25,699 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 18:10:26,338 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 5\n",
+ " - prompt_tokens: 3977\n",
+ " - total_tokens: 3982\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 0\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0050\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 5\n",
+ " - prompt_tokens: 3977\n",
+ " - total_tokens: 3982\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 0\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0050\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4522/7340 [164:08<102:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:29,112 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:10:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:30,422 - agent.ComputerAgent - INFO - Computer: type({'text': \"ffmpeg -hide_banner -i video.mp4 2>&1 | sed -n '1,120p'\\n\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4522/7340 [164:12<102:20, 27.5 steps/min]\n",
+ "2025-08-11 18:10:31,841 - agent.ComputerAgent - INFO - Computer: click({'x': 802, 'y': 437})\n",
+ "2025-08-11 18:10:32,505 - agent.ComputerAgent - INFO - Computer: click({'x': 745, 'y': 540})\n",
+ "2025-08-11 18:10:33,146 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:10:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4523/7340 [164:14<102:17, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:33,803 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:10:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:34,476 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:10:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:10:35,141 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4525/7340 [164:17<102:12, 27.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:36,437 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:10:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:37,794 - agent.ComputerAgent - INFO - Computer: click({'x': 1014, 'y': 31})\n",
+ " 62%|████████████████████████----------------| 4526/7340 [164:19<102:10, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:38,466 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:10:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:39,125 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 354})\n",
+ " 62%|████████████████████████----------------| 4527/7340 [164:20<102:07, 27.5 steps/min]2025-08-11 18:10:39,795 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:10:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4528/7340 [164:22<102:04, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:10:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:42,145 - agent.ComputerAgent - INFO - Computer: click({'x': 437, 'y': 99})\n",
+ "\u001b[92m18:10:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4528/7340 [164:24<102:06, 27.5 steps/min]\u001b[92m18:10:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:43,477 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:44,127 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:10:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:10:44,765 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:10:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:10:45,424 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:10:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4545/7340 [164:27<101:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:10:46,114 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:46,753 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 51, 'y': 730}, {'x': 991, 'y': 759}]})\n",
+ "2025-08-11 18:10:47,399 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:10:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4546/7340 [164:29<101:05, 27.6 steps/min]2025-08-11 18:10:48,065 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:10:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/91803c09-cf12-4c24-92ec-24bcf68c0897/close \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4547/7340 [164:30<101:02, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:10:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:50,780 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:10:50,781 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4547/7340 [164:33<101:04, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]\u001b[92m18:10:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4548/7340 [164:35<101:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]\u001b[92m18:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:10:55,827 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4548/7340 [164:38<101:04, 27.6 steps/min]\u001b[92m18:10:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "2025-08-11 18:10:57,356 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:10:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:10:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:10:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 62%|████████████████████████----------------| 4548/7340 [164:41<101:06, 27.6 steps/min]\u001b[92m18:10:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:00,417 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:11:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:01,755 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 445})\n",
+ "\u001b[92m18:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4548/7340 [164:43<101:07, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:02,411 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 181})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:03,058 - agent.ComputerAgent - INFO - Computer: click({'x': 879, 'y': 270})\n",
+ "\u001b[92m18:11:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:03,738 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 248})\n",
+ "\u001b[92m18:11:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4549/7340 [164:45<101:05, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:04,415 - agent.ComputerAgent - INFO - Computer: click({'x': 900, 'y': 203})\n",
+ "\u001b[92m18:11:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:05,052 - agent.ComputerAgent - INFO - Computer: click({'x': 835, 'y': 638})\n",
+ "2025-08-11 18:11:05,713 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 182})\n",
+ "2025-08-11 18:11:06,367 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 417})\n",
+ "\u001b[92m18:11:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:07,000 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:11:07,001 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 427})\n",
+ "\u001b[92m18:11:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4552/7340 [164:48<100:56, 27.6 steps/min]2025-08-11 18:11:07,677 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 231})\n",
+ "2025-08-11 18:11:08,416 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 736})\n",
+ "2025-08-11 18:11:09,068 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 325})\n",
+ "2025-08-11 18:11:09,812 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:11:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4557/7340 [164:51<100:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 62%|████████████████████████----------------| 4560/7340 [164:52<100:31, 27.7 steps/min]\u001b[92m18:11:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:11,713 - agent.ComputerAgent - INFO - Computer: click({'x': 239, 'y': 323})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:13,136 - agent.ComputerAgent - INFO - Computer: type({'text': 'ffmpeg -y -i video.mp4 -map 0:s:0 -c:s srt subtitles.srt\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4560/7340 [164:55<100:32, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 62%|████████████████████████----------------| 4562/7340 [164:56<100:26, 27.7 steps/min]\u001b[92m18:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:15,635 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 576})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 576})\n",
+ "\u001b[92m18:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:16,295 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 623, 'y': 730}, {'x': 526, 'y': 730}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 623, 'y': 730}, {'x': 526, 'y': 730}]})\n",
+ "2025-08-11 18:11:16,967 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4562/7340 [164:58<100:27, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:17,628 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:11:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:18,274 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:11:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:00<100:21, 27.7 steps/min]2025-08-11 18:11:19,319 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:11:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:01<100:22, 27.7 steps/min]2025-08-11 18:11:19,977 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:11:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:21,016 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:11:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:21,686 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:11:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:22,325 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:11:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:04<100:24, 27.6 steps/min]2025-08-11 18:11:22,968 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:11:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:23,610 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:11:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:05<100:24, 27.6 steps/min]2025-08-11 18:11:24,257 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:11:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:24,908 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:11:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:06<100:25, 27.6 steps/min]2025-08-11 18:11:25,563 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:11:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:26,246 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:11:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:26,948 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:11:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:11:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:10<100:27, 27.6 steps/min]\u001b[92m18:11:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:28,917 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:11:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:11:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:29,951 - agent.ComputerAgent - INFO - Computer: click({'x': 339, 'y': 336})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 339, 'y': 336})\n",
+ " 62%|████████████████████████----------------| 4564/7340 [165:11<100:28, 27.6 steps/min]\u001b[92m18:11:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:30,650 - agent.ComputerAgent - INFO - Computer: click({'x': 950, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 950, 'y': 243})\n",
+ " 62%|████████████████████████----------------| 4566/7340 [165:13<100:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4566/7340 [165:15<100:24, 27.6 steps/min]\u001b[92m18:11:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:11:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:35,163 - agent.ComputerAgent - INFO - Computer: click({'x': 880, 'y': 585})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 880, 'y': 585})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:36,486 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ " 62%|████████████████████████----------------| 4566/7340 [165:18<100:25, 27.6 steps/min]2025-08-11 18:11:37,148 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:37,798 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:11:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4567/7340 [165:19<100:22, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:38,457 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:11:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:11:39,130 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:11:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4568/7340 [165:20<100:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4568/7340 [165:21<100:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 62%|████████████████████████----------------| 4568/7340 [165:23<100:21, 27.6 steps/min]\u001b[92m18:11:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:11:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:43,012 - agent.ComputerAgent - INFO - Computer: click({'x': 483, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 483, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:44,352 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+x'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+x'})\n",
+ " 62%|████████████████████████----------------| 4568/7340 [165:26<100:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:44,988 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:11:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:46,273 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -lh\\nffprobe -hide_banner -loglevel error -select_streams s -show_entries stream=index,codec_name:format_tags=title -of default=noprint_wrappers=1 video.mp4 || true\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ls -lh\\nffprobe -hide_banner -loglevel error -select_streams s -show_entries stream=index,codec_name:format_tags=title -of default=noprint_wrappers=1 video.mp4 || true\\n'})\n",
+ "2025-08-11 18:11:46,948 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:11:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4569/7340 [165:28<100:21, 27.6 steps/min]2025-08-11 18:11:47,600 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:11:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 62%|████████████████████████----------------| 4570/7340 [165:29<100:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 62%|████████████████████████----------------| 4570/7340 [165:31<100:19, 27.6 steps/min]\u001b[92m18:11:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:11:50,807 - agent.ComputerAgent - INFO - Computer: click({'x': 877, 'y': 283})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 877, 'y': 283})\n",
+ "\u001b[92m18:11:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:51,468 - agent.ComputerAgent - INFO - Computer: click({'x': 92, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 92, 'y': 248})\n",
+ "\u001b[92m18:11:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4570/7340 [165:33<100:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:52,115 - agent.ComputerAgent - INFO - Computer: move({'x': 93, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 93, 'y': 185})\n",
+ "2025-08-11 18:11:52,733 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:11:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:11:54,067 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ " 62%|████████████████████████----------------| 4572/7340 [165:35<100:15, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:55,738 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ " 62%|████████████████████████----------------| 4573/7340 [165:37<100:12, 27.6 steps/min]\u001b[92m18:11:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:11:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:11:57,688 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:11:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:11:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:12:00,343 - agent.ComputerAgent - INFO - Computer: type({'text': 'profiles'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'profiles'})\n",
+ " 62%|████████████████████████----------------| 4573/7340 [165:42<100:15, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/147c9dab-e768-40e5-a3b1-3439f8a0138d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:12:00,993 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 320})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 275, 'y': 320})\n",
+ "\u001b[92m18:12:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:12:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:12:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:12:02,355 - agent.ComputerAgent - INFO - Computer: click({'x': 730, 'y': 275})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 730, 'y': 275})\n",
+ "\u001b[92m18:12:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 62%|████████████████████████----------------| 4575/7340 [165:44<100:09, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:12:03,029 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 356})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 356})\n",
+ "\u001b[92m18:12:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:12:03,654 - agent.ComputerAgent - INFO - Computer: click({'x': 252, 'y': 58})\n",
+ " 62%|████████████████████████----------------| 4577/7340 [165:47<100:04, 27.6 steps/min]\n",
+ "[... repetitive LiteLLM / httpx / agent.ComputerAgent log output truncated ...]\n",
+ " 63%|█████████████████████████---------------| 4601/7340 [166:33<99:09, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4601/7340 [166:36<99:11, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:12:55,480 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:12:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:12:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:12:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:12:57,510 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:12:57,512 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 63%|█████████████████████████---------------| 4601/7340 [166:39<99:12, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:12:58,170 - agent.ComputerAgent - INFO - Computer: click({'x': 298, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 298, 'y': 219})\n",
+ "\u001b[92m18:12:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:12:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:12:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:12:59,568 - agent.ComputerAgent - INFO - Computer: click({'x': 588, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 588, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4602/7340 [166:42<99:10, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:00,957 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 494, 'x': 121, 'y': 111})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 494, 'x': 121, 'y': 111})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:13:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:01,624 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 757})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 757})\n",
+ "\u001b[92m18:13:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4605/7340 [166:43<99:01, 27.6 steps/min]2025-08-11 18:13:02,297 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 115, 'y': 184})\n",
+ "2025-08-11 18:13:02,959 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:13:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4606/7340 [166:44<98:58, 27.6 steps/min]2025-08-11 18:13:03,996 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:13:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4607/7340 [166:46<98:56, 27.6 steps/min]\u001b[92m18:13:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:13:05,338 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:13:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:06,011 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:13:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4607/7340 [166:47<98:56, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:07,035 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:13:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:13:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4607/7340 [166:48<98:57, 27.6 steps/min]2025-08-11 18:13:07,729 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 305})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 263, 'y': 305})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:13:09,050 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4607/7340 [166:50<98:58, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:09,740 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:10,441 - agent.ComputerAgent - INFO - Computer: click({'x': 220, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 220, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4608/7340 [166:52<98:56, 27.6 steps/min]\u001b[92m18:13:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:11,751 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:13:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:12,439 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:13:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:13:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb48f65f-d00e-465a-a0ea-394e844382ca/close \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4609/7340 [166:54<98:53, 27.6 steps/min]2025-08-11 18:13:13,139 - agent.ComputerAgent - INFO - Computer: click({'x': 118, 'y': 210})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 118, 'y': 210})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 63%|█████████████████████████---------------| 4610/7340 [166:55<98:51, 27.6 steps/min]2025-08-11 18:13:14,480 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:13:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:13:15,809 - agent.ComputerAgent - INFO - Computer: click({'x': 104, 'y': 186})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 104, 'y': 186})\n",
+ "2025-08-11 18:13:16,450 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:13:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4611/7340 [166:58<98:49, 27.6 steps/min]\u001b[92m18:13:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:13:17,742 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:13:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:18,409 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:13:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4612/7340 [167:00<98:46, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]\u001b[92m18:13:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4612/7340 [167:02<98:48, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4612/7340 [167:04<98:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]\u001b[92m18:13:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]\n",
+ "2025-08-11 18:13:24,650 - agent.ComputerAgent - INFO - Computer: type({'text': 'ffprobe -hide_banner -loglevel error -select_streams s -show_entries stream=index,codec_name:format_tags=title -of default=noprint_wrappers=1 video.mp4 || true\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ffprobe -hide_banner -loglevel error -select_streams s -show_entries stream=index,codec_name:format_tags=title -of default=noprint_wrappers=1 video.mp4 || true\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:13:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/53e1a378-de8f-4a22-9dc0-27eef85d8356/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 63%|█████████████████████████---------------| 4612/7340 [167:08<98:51, 27.6 steps/min]\u001b[92m18:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:27,928 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 595})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 400, 'y': 595})\n",
+ "2025-08-11 18:13:28,601 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 528, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 528, 'y': 142})\n",
+ "\u001b[92m18:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4613/7340 [167:10<98:49, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:13:29,487 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 40})\n",
+ "2025-08-11 18:13:30,151 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 35})\n",
+ "2025-08-11 18:13:31,002 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 52})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]2025-08-11 18:13:31,661 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:13:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:32,554 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:13:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4615/7340 [167:15<98:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]2025-08-11 18:13:34,632 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+x'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+x'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:37,424 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+m'})\n",
+ " 63%|█████████████████████████---------------| 4618/7340 [167:19<98:37, 27.6 steps/min]\u001b[92m18:13:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:38,082 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:13:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:38,723 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 93, 'y': 246}, {'x': 426, 'y': 308}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 93, 'y': 246}, {'x': 426, 'y': 308}]})\n",
+ "\u001b[92m18:13:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:39,346 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:13:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:13:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:40,033 - agent.ComputerAgent - INFO - Computer: click({'x': 674, 'y': 342})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 674, 'y': 342})\n",
+ "2025-08-11 18:13:40,710 - agent.ComputerAgent - INFO - Computer: click({'x': 583, 'y': 161})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 583, 'y': 161})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4618/7340 [167:23<98:40, 27.6 steps/min]\u001b[92m18:13:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:13:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:13:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:43,198 - agent.ComputerAgent - INFO - Computer: click({'x': 615, 'y': 69})\n",
+ "\u001b[92m18:13:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4621/7340 [167:24<98:30, 27.6 steps/min]2025-08-11 18:13:43,873 - agent.ComputerAgent - INFO - Computer: click({'x': 882, 'y': 283})\n",
+ "\u001b[92m18:13:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:44,538 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 510, 'y': 731}, {'x': 47, 'y': 730}]})\n",
+ "2025-08-11 18:13:45,211 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:13:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4622/7340 [167:26<98:28, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/reset \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:27<98:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:13:47,417 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:13:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:29<98:22, 27.6 steps/min]2025-08-11 18:13:48,091 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:13:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:48,773 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:13:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:30<98:23, 27.6 steps/min]2025-08-11 18:13:49,430 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:13:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:50,111 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:13:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:31<98:24, 27.6 steps/min]2025-08-11 18:13:50,799 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:13:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:51,451 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:33<98:24, 27.6 steps/min]2025-08-11 18:13:52,162 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:13:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:52,843 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:13:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:53,491 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:13:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:35<98:26, 27.6 steps/min]2025-08-11 18:13:54,151 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:13:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:13:54,821 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:13:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:13:56,137 - agent.ComputerAgent - INFO - Computer: move({'x': 130, 'y': 186})\n",
+ " 63%|█████████████████████████---------------| 4624/7340 [167:37<98:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:13:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4625/7340 [167:38<98:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:13:58,551 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:13:58,552 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ " 63%|█████████████████████████---------------| 4625/7340 [167:40<98:25, 27.6 steps/min]\u001b[92m18:13:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:13:59,269 - agent.ComputerAgent - INFO - Computer: click({'x': 113, 'y': 215})\n",
+ " 63%|█████████████████████████---------------| 4626/7340 [167:41<98:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:01,116 - agent.ComputerAgent - INFO - Computer: type({'text': 'head -n 20 subtitles.srt\\n'})\n",
+ " 63%|█████████████████████████---------------| 4628/7340 [167:44<98:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 63%|█████████████████████████---------------| 4628/7340 [167:45<98:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:05,033 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 757})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:05,691 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:14:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4628/7340 [167:47<98:19, 27.6 steps/min]2025-08-11 18:14:06,372 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:14:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:14:07,045 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:14:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4629/7340 [167:48<98:16, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:08,374 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:14:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4629/7340 [167:50<98:17, 27.6 steps/min]\n",
+ "\u001b[92m18:14:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:09,015 - agent.ComputerAgent - INFO - Computer: click({'x': 102, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:11,029 - agent.ComputerAgent - INFO - Agent: The webpage has been saved as a PDF with margins set to “None.”\n",
+ "\n",
+ "Location: Desktop\n",
+ "Filename: LLM Powered Autonomous Agents _ Lil'Log.pdf\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:14:11,683 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 299\n",
+ " - prompt_tokens: 10776\n",
+ " - total_tokens: 11075\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0165\n",
+ " 63%|█████████████████████████---------------| 4630/7340 [167:53<98:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4631/7340 [167:54<98:13, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:13,693 - agent.ComputerAgent - INFO - Computer: double_click({'x': 331, 'y': 128})\n",
+ "\u001b[92m18:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:14,351 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:14:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:14:15,033 - agent.ComputerAgent - INFO - Computer: click({'x': 711, 'y': 118})\n",
+ " 63%|█████████████████████████---------------| 4631/7340 [167:56<98:14, 27.6 steps/min]\u001b[92m18:14:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:16,073 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 347})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:18,742 - agent.ComputerAgent - INFO - Computer: type({'text': 'x'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:14:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4633/7340 [168:01<98:10, 27.6 steps/min]\u001b[92m18:14:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:20,072 - agent.ComputerAgent - INFO - Computer: click({'x': 389, 'y': 76})\n",
+ "\u001b[92m18:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 63%|█████████████████████████---------------| 4635/7340 [168:02<98:04, 27.6 steps/min]\u001b[92m18:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:21,230 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 48, 'y': 731}, {'x': 518, 'y': 731}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:22,565 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 91, 'y': 245}, {'x': 453, 'y': 263}]})\n",
+ " 63%|█████████████████████████---------------| 4636/7340 [168:04<98:01, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:14:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:23,223 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ " 63%|█████████████████████████---------------| 4638/7340 [168:05<97:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:24,882 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:14:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4639/7340 [168:06<97:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:25,562 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:14:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:14:26,253 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:14:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4639/7340 [168:08<97:53, 27.6 steps/min]2025-08-11 18:14:27,325 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:14:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:29,287 - agent.ComputerAgent - INFO - Computer: type({'text': 'ffprobe -hide_banner -loglevel error -select_streams s -show_entries stream=index,codec_name -of default=noprint_wrappers=1 video.mp4\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4639/7340 [168:12<97:56, 27.6 steps/min]\u001b[92m18:14:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:31,272 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:14:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:14:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:31,906 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:14:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:14:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:33,641 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 212, 'y': 184})\n",
+ "\u001b[92m18:14:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:34,255 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:14:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:14:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4640/7340 [168:16<97:54, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:34,901 - agent.ComputerAgent - INFO - Computer: click({'x': 1013, 'y': 30})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1013, 'y': 30})\n",
+ "2025-08-11 18:14:35,555 - agent.ComputerAgent - INFO - Computer: move({'x': 884, 'y': 282})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 884, 'y': 282})\n",
+ "\u001b[92m18:14:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:36,857 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+m'})\n",
+ "2025-08-11 18:14:37,522 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:14:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4641/7340 [168:19<97:53, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:14:38,203 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 179})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 194, 'y': 179})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:39,494 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4643/7340 [168:21<97:47, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:40,809 - agent.ComputerAgent - INFO - Computer: click({'x': 760, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 760, 'y': 34})\n",
+ "\u001b[92m18:14:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4652/7340 [168:23<97:17, 27.6 steps/min]\u001b[92m18:14:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:42,107 - agent.ComputerAgent - INFO - Computer: click({'x': 129, 'y': 554})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 129, 'y': 554})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:42,795 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:14:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:14:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4653/7340 [168:25<97:15, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:14:44,180 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 189})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4654/7340 [168:26<97:12, 27.6 steps/min]\u001b[92m18:14:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:14:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:14:45,513 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 64})\n",
+ "\u001b[92m18:14:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c83605a3-e62d-48d7-8568-f181d5627773/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:46,202 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:14:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:14:46,883 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 240})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 240})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:48,887 - agent.ComputerAgent - INFO - Computer: type({'text': 'rfkill list all\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'rfkill list all\\n'})\n",
+ " 63%|█████████████████████████---------------| 4655/7340 [168:30<97:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:49,562 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:14:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:50,232 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:14:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:32<97:02, 27.6 steps/min]2025-08-11 18:14:50,884 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:14:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:14:51,537 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:14:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:33<97:03, 27.6 steps/min]2025-08-11 18:14:52,223 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:14:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:14:52,912 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:14:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:34<97:03, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:35<97:04, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:36<97:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/reset \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:37<97:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]\u001b[92m18:14:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:14:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 18:14:58,623 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:40<97:07, 27.6 steps/min]2025-08-11 18:14:59,277 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:14:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]2025-08-11 18:15:00,134 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:15:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+    "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00,  1.33s/it]\n",
+ "2025-08-11 18:15:00,802 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:15:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:15:01,502 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:15:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:43<97:09, 27.6 steps/min]\u001b[92m18:15:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:04,124 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:15:04,125 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:46<97:10, 27.6 steps/min]\u001b[92m18:15:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:05,456 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 738})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 112, 'y': 738})\n",
+ "2025-08-11 18:15:06,097 - agent.ComputerAgent - INFO - Computer: click({'x': 443, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 443, 'y': 429})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:07,404 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:15:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:15:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:08,080 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 134})\n",
+ " 63%|█████████████████████████---------------| 4658/7340 [168:49<97:12, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:08,748 - agent.ComputerAgent - INFO - Computer: click({'x': 102, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 102, 'y': 148})\n",
+ "2025-08-11 18:15:09,415 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 218})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 278, 'y': 218})\n",
+ "\u001b[92m18:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4661/7340 [168:51<97:03, 27.6 steps/min]2025-08-11 18:15:10,742 - agent.ComputerAgent - INFO - Computer: double_click({'x': 331, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 331, 'y': 128})\n",
+ "2025-08-11 18:15:11,385 - agent.ComputerAgent - INFO - Computer: move({'x': 115, 'y': 210})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 115, 'y': 210})\n",
+ " 64%|█████████████████████████---------------| 4663/7340 [168:53<96:57, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:15:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:12,546 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 245})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4665/7340 [168:54<96:51, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:15:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:14,554 - agent.ComputerAgent - INFO - Computer: click({'x': 228, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 228, 'y': 52})\n",
+ "\u001b[92m18:15:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:15,893 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:15:17,239 - agent.ComputerAgent - INFO - Agent: I completed the task.\n",
+ "\n",
+ "What I did:\n",
+ "- Extracted the embedded subtitle track from video.mp4 into an SRT file named subtitles.srt in the same directory as the video.\n",
+ "- Created a new copy of the video with the subtitle track removed and replaced the original:\n",
+ " - video.mp4 now has no embedded subtitles.\n",
+ " - A backup of the original (with subtitles) is saved as video_with_subs_backup.mp4.\n",
+ "- Verified the SRT contains the text and that video.mp4 has no subtitle streams.\n",
+ "\n",
+ "Location:\n",
+ "- Files are in your home directory:\n",
+ " - ~/video.mp4 (no subtitles)\n",
+ " - ~/subtitles.srt (extracted subtitles)\n",
+ " - ~/video_with_subs_backup.mp4 (original with subtitles, for backup)\n",
+ "\n",
+ "Note: Your default video player shows a codec error for H.264/AAC. Use VLC (already installed on the dock) to play the video, or install the needed codecs if you prefer the default player.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:15:17,924 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1106\n",
+ " - prompt_tokens: 13086\n",
+ " - total_tokens: 14192\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 896\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0274\n",
+ " 64%|█████████████████████████---------------| 4667/7340 [168:59<96:47, 27.6 steps/min]2025-08-11 18:15:18,623 - agent.ComputerAgent - INFO - Computer: move({'x': 85, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 85, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 64%|█████████████████████████---------------| 4669/7340 [169:03<96:42, 27.6 steps/min]\u001b[92m18:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:22,612 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 293})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 901, 'y': 293})\n",
+ "\u001b[92m18:15:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:23,316 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:15:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:23,983 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:15:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:15:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:24,668 - agent.ComputerAgent - INFO - Computer: move({'x': 866, 'y': 204})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 866, 'y': 204})\n",
+ "\u001b[92m18:15:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4670/7340 [169:06<96:41, 27.6 steps/min]\u001b[92m18:15:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:25,305 - agent.ComputerAgent - INFO - Computer: click({'x': 842, 'y': 571})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 842, 'y': 571})\n",
+ "2025-08-11 18:15:25,977 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 39})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 39})\n",
+ "2025-08-11 18:15:26,675 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 241})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 241})\n",
+ "2025-08-11 18:15:27,303 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:15:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4672/7340 [169:09<96:35, 27.6 steps/min]2025-08-11 18:15:28,446 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:15:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4675/7340 [169:10<96:26, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:15:29,087 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:15:29,735 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4675/7340 [169:11<96:26, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4675/7340 [169:12<96:27, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:31,525 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:15:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:32,576 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:15:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:15:33,295 - agent.ComputerAgent - INFO - Computer: double_click({'x': 21, 'y': 195})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 21, 'y': 195})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4677/7340 [169:15<96:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:15:34,304 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:15:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:15:35,623 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ " 64%|█████████████████████████---------------| 4678/7340 [169:17<96:20, 27.6 steps/min]2025-08-11 18:15:36,290 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:15:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:15:36,963 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:15:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4679/7340 [169:19<96:17, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:38,306 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:15:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:15:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:38,972 - agent.ComputerAgent - INFO - Computer: click({'x': 760, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 760, 'y': 34})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b01cd4a6-3203-476b-8ece-c651b889f821/close \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4679/7340 [169:20<96:18, 27.6 steps/min]2025-08-11 18:15:40,304 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:15:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4680/7340 [169:22<96:16, 27.6 steps/min]2025-08-11 18:15:41,583 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:15:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:15:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4680/7340 [169:24<96:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4680/7340 [169:25<96:17, 27.6 steps/min]2025-08-11 18:15:43,945 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:15:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4680/7340 [169:27<96:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]2025-08-11 18:15:46,993 - agent.ComputerAgent - INFO - Computer: type({'text': 'A1:L2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'A1:L2'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.59s/it]\u001b[92m18:15:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4680/7340 [169:29<96:20, 27.6 steps/min]2025-08-11 18:15:48,534 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:15:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]\u001b[92m18:15:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4681/7340 [169:31<96:17, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 64%|█████████████████████████---------------| 4681/7340 [169:32<96:18, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 64%|█████████████████████████---------------| 4681/7340 [169:33<96:18, 27.6 steps/min]\u001b[92m18:15:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:52,401 - agent.ComputerAgent - INFO - Computer: click({'x': 209, 'y': 554})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 209, 'y': 554})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:53,738 - agent.ComputerAgent - INFO - Computer: click({'x': 569, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 569, 'y': 446})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4681/7340 [169:35<96:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:15:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:54,397 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 309})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 309})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:15:55,680 - agent.ComputerAgent - INFO - Computer: type({'text': 'rfkill list all\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'rfkill list all\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:15:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:56,987 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 490})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 605, 'y': 490})\n",
+ "\u001b[92m18:15:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4687/7340 [169:38<96:01, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:15:57,677 - agent.ComputerAgent - INFO - Computer: double_click({'x': 93, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 93, 'y': 248})\n",
+ "\u001b[92m18:15:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:15:58,314 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:15:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:15:59,000 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 940, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4690/7340 [169:41<95:52, 27.6 steps/min]\u001b[92m18:15:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 64%|█████████████████████████---------------| 4692/7340 [169:44<95:47, 27.6 steps/min]\u001b[92m18:16:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:16:02,930 - agent.ComputerAgent - INFO - Computer: move({'x': 821, 'y': 284})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 821, 'y': 284})\n",
+ "\u001b[92m18:16:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:16:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:16:03,586 - agent.ComputerAgent - INFO - Computer: click({'x': 262, 'y': 151})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 262, 'y': 151})\n",
+ "2025-08-11 18:16:04,238 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 317})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 317})\n",
+ "\u001b[92m18:16:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4692/7340 [169:45<95:48, 27.6 steps/min]\u001b[92m18:16:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:16:04,884 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 52})\n",
+ "2025-08-11 18:16:05,566 - agent.ComputerAgent - INFO - Computer: click({'x': 760, 'y': 187})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 760, 'y': 187})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/941d9ec3-7c28-40f6-b948-70db95115571/close \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4695/7340 [169:47<95:39, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d4e75282-c303-4f9a-92ca-6ac64361b793/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4697/7340 [169:49<95:33, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:16:08,831 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<95:34, 27.7 steps/min]2025-08-11 18:16:09,468 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:16:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:16:10,133 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:16:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.71s/it]27.7 steps/min]2025-08-11 18:16:10,896 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:16:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:16:11,585 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:16:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4698/7340 [169:53<95:32, 27.7 steps/min]2025-08-11 18:16:12,472 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.72s/it]INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:16:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4698/7340 [169:54<95:33, 27.7 steps/min]2025-08-11 18:16:13,158 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:16:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:16:14,039 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:16:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]27.6 steps/min]\n",
+ "2025-08-11 18:16:15,484 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:16:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4698/7340 [169:57<95:34, 27.6 steps/min]2025-08-11 18:16:16,150 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:16:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:16:17,212 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4698/7340 [169:59<95:35, 27.6 steps/min]2025-08-11 18:16:17,901 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:16:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:16:18,600 - agent.ComputerAgent - INFO - Computer: double_click({'x': 331, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 331, 'y': 128})\n",
+ "2025-08-11 18:16:19,274 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:16:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4698/7340 [170:01<95:36, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 64%|█████████████████████████---------------| 4699/7340 [170:02<95:33, 27.6 steps/min]\u001b[92m18:16:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:16:21,162 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d107e49-ae48-4b20-a0a1-7facc71e66f7/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4699/7340 [170:04<95:35, 27.6 steps/min]\u001b[92m18:16:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 503 Service Unavailable\"\n",
+ "INFO:openai._base_client:Retrying request to /chat/completions in 0.457190 seconds\n",
+ " 64%|█████████████████████████---------------| 4700/7340 [170:05<95:32, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/invoke \"HTTP/1.1 200 OK\"\n",
+ "ERROR:asyncio:Unclosed connection\n",
+ "client_connection: Connection\n",
+ " 64%|█████████████████████████---------------| 4702/7340 [170:06<95:26, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:16:25,320 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:16:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a3e700f1-e7d1-46c4-96eb-b69f07a81fb3/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]2025-08-11 18:16:26,749 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:16:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4702/7340 [170:08<95:27, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]27.6 steps/min]2025-08-11 18:16:29,788 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:16:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ " 64%|█████████████████████████---------------| 4702/7340 [170:12<95:29, 27.6 steps/min]\u001b[92m18:16:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<95:29, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 64%|█████████████████████████---------------| 4702/7340 [170:17<95:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.70s/it]2025-08-11 18:16:36,996 - agent.ComputerAgent - INFO - Computer: type({'text': 'bluetoothctl list\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'bluetoothctl list\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.46s/it]\n",
+ " 64%|█████████████████████████---------------| 4702/7340 [170:19<95:33, 27.6 steps/min]\u001b[92m18:16:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4703/7340 [170:21<95:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:16:41,120 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4703/7340 [170:23<95:32, 27.6 steps/min]\u001b[92m18:16:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:16:42,440 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:16:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4703/7340 [170:24<95:32, 27.6 steps/min]\u001b[92m18:16:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:16:43,637 - agent.ComputerAgent - INFO - Computer: click({'x': 676, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 676, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4703/7340 [170:26<95:34, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:16:45,651 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:16:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:30<95:33, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:16:50,409 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:16:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:32<95:33, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:34<95:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:36<95:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:16:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:40<95:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:41<95:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:42<95:39, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:43<95:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:44<95:40, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:46<95:42, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:48<95:42, 27.5 steps/min]\u001b[92m18:17:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:49<95:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:51<95:44, 27.5 steps/min]\u001b[92m18:17:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:10,270 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:17:10,271 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 384})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 384})\n",
+ " 64%|█████████████████████████---------------| 4704/7340 [170:52<95:45, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:11,629 - agent.ComputerAgent - INFO - Computer: type({'text': 'systemctl is-active bluetooth\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'systemctl is-active bluetooth\\n'})\n",
+ " 64%|█████████████████████████---------------| 4706/7340 [170:54<95:39, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4706/7340 [170:55<95:40, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4706/7340 [170:56<95:40, 27.5 steps/min]\u001b[92m18:17:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:15,339 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 576})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 576})\n",
+ " 64%|█████████████████████████---------------| 4706/7340 [170:57<95:41, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:17,047 - agent.ComputerAgent - INFO - Computer: click({'x': 708, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 708, 'y': 74})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/reset \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4707/7340 [170:58<95:38, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4708/7340 [170:59<95:35, 27.5 steps/min]\u001b[92m18:17:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:18,712 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 244, 'y': 232})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:19,364 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:17:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4708/7340 [171:01<95:36, 27.5 steps/min]2025-08-11 18:17:20,015 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:17:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:20,654 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:17:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4709/7340 [171:02<95:33, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:21,345 - agent.ComputerAgent - INFO - Computer: click({'x': 313, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 313, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cfe4e097-0434-4025-a00a-78e26d753e51/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4709/7340 [171:03<95:34, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:22,520 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:17:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:23,205 - agent.ComputerAgent - INFO - Computer: double_click({'x': 422, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 422, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4710/7340 [171:04<95:31, 27.5 steps/min]2025-08-11 18:17:23,853 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:17:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:24,545 - agent.ComputerAgent - INFO - Computer: click({'x': 234, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 234, 'y': 35})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4711/7340 [171:06<95:29, 27.5 steps/min]2025-08-11 18:17:25,828 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:17:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:26,517 - agent.ComputerAgent - INFO - Computer: click({'x': 882, 'y': 285})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 882, 'y': 285})\n",
+ "2025-08-11 18:17:27,178 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:17:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4712/7340 [171:08<95:27, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:28,352 - agent.ComputerAgent - INFO - Computer: click({'x': 548, 'y': 199})\n",
+ " 64%|█████████████████████████---------------| 4713/7340 [171:10<95:24, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:29,662 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:17:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4714/7340 [171:11<95:21, 27.5 steps/min]\n",
+ "\u001b[92m18:17:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:30,311 - agent.ComputerAgent - INFO - Computer: click({'x': 993, 'y': 34})\n",
+ " 64%|█████████████████████████---------------| 4714/7340 [171:12<95:22, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:31,935 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:17:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4715/7340 [171:13<95:19, 27.5 steps/min]\n",
+ "2025-08-11 18:17:32,624 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 347})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:33,291 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:17:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4715/7340 [171:15<95:20, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:33,943 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:17:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:34,606 - agent.ComputerAgent - INFO - Computer: click({'x': 580, 'y': 471})\n",
+ " 64%|█████████████████████████---------------| 4716/7340 [171:16<95:17, 27.5 steps/min]\u001b[92m18:17:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:35,774 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 286})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4717/7340 [171:17<95:15, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:37,127 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:17:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4718/7340 [171:18<95:12, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:37,803 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:17:37,804 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 753})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:38,482 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:17:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4718/7340 [171:20<95:13, 27.5 steps/min]\n",
+ "\u001b[92m18:17:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:39,633 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:17:39,634 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 761})\n",
+ " 64%|█████████████████████████---------------| 4719/7340 [171:21<95:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4720/7340 [171:22<95:07, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:41,352 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:17:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:42,654 - agent.ComputerAgent - INFO - Computer: click({'x': 192, 'y': 541})\n",
+ " 64%|█████████████████████████---------------| 4720/7340 [171:24<95:08, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:43,293 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:17:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:44,631 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:17:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:17:45,303 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:17:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4721/7340 [171:27<95:06, 27.5 steps/min]\n",
+ "\u001b[92m18:17:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:45,972 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:17:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:17:46,627 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:47,932 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo systemctl enable --now bluetooth\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4721/7340 [171:29<95:08, 27.5 steps/min]\u001b[92m18:17:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:17:48,574 - agent.ComputerAgent - INFO - Computer: click({'x': 544, 'y': 250})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:49,928 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ " 64%|█████████████████████████---------------| 4723/7340 [171:31<95:02, 27.5 steps/min]2025-08-11 18:17:50,572 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:17:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4725/7340 [171:32<94:56, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/26d5566b-d949-4b71-accb-45197078f693/reset \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4725/7340 [171:34<94:57, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:17:54,419 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:17:54,420 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:17:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ " 64%|█████████████████████████---------------| 4725/7340 [171:38<94:59, 27.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:57,473 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:17:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:17:58,133 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4725/7340 [171:39<95:00, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:17:59,403 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:17:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:17:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:00,072 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:18:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:00,700 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:18:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4725/7340 [171:42<95:01, 27.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:01,348 - agent.ComputerAgent - INFO - Computer: click({'x': 324, 'y': 164})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:04,093 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+r'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 64%|█████████████████████████---------------| 4725/7340 [171:46<95:04, 27.5 steps/min]\u001b[92m18:18:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:05,411 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 367})\n",
+ "2025-08-11 18:18:06,185 - agent.ComputerAgent - INFO - Computer: click({'x': 156, 'y': 35})\n",
+ "\u001b[92m18:18:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:07,596 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:08,955 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:10,375 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "2025-08-11 18:18:11,033 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:18:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:11,693 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:18:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:12,363 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:18:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:18:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 64%|█████████████████████████---------------| 4726/7340 [171:55<95:05, 27.5 steps/min]\u001b[92m18:18:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:14,303 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:18:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:14,965 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:18:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:15,655 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:18:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:16,306 - agent.ComputerAgent - INFO - Computer: click({'x': 483, 'y': 287})\n",
+ "2025-08-11 18:18:16,977 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 277})\n",
+ "2025-08-11 18:18:17,651 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 45})\n",
+ " 64%|█████████████████████████---------------| 4729/7340 [171:59<94:57, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:18,659 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:18:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 64%|█████████████████████████---------------| 4732/7340 [172:01<94:48, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:21,503 - agent.ComputerAgent - INFO - Computer: type({'text': 'exclude'})\n",
+ " 64%|█████████████████████████---------------| 4732/7340 [172:03<94:49, 27.5 steps/min]\u001b[92m18:18:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:22,154 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 136})\n",
+ " 64%|█████████████████████████---------------| 4733/7340 [172:04<94:46, 27.5 steps/min]\u001b[92m18:18:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:22,806 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:18:22,807 - agent.ComputerAgent - INFO - Computer: click({'x': 793, 'y': 736})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4735/7340 [172:05<94:40, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:24,503 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:18:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4735/7340 [172:06<94:41, 27.5 steps/min]2025-08-11 18:18:25,159 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:18:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:25,794 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:18:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:26,465 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:18:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4735/7340 [172:08<94:42, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:28,484 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:18:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4735/7340 [172:10<94:43, 27.5 steps/min]2025-08-11 18:18:29,109 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:18:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:29,754 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:18:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:18:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4735/7340 [172:13<94:45, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:18:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:32,737 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 316})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:36,231 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4735/7340 [172:19<94:48, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:38,232 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 287})\n",
+ "2025-08-11 18:18:38,919 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:18:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:40,252 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "\u001b[92m18:18:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4736/7340 [172:22<94:46, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:40,919 - agent.ComputerAgent - INFO - Computer: click({'x': 842, 'y': 571})\n",
+ "\u001b[92m18:18:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:42,262 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:18:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:42,942 - agent.ComputerAgent - INFO - Computer: click({'x': 446, 'y': 714})\n",
+ " 65%|█████████████████████████---------------| 4737/7340 [172:24<94:44, 27.5 steps/min]\u001b[92m18:18:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:44,083 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 554, 'y': 139})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4739/7340 [172:26<94:38, 27.5 steps/min]\u001b[92m18:18:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:46,099 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4740/7340 [172:27<94:36, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:18:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:47,391 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 50})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcdab7d3-0448-49dd-b2db-f79a7c74a08b/close \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4741/7340 [172:29<94:33, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:18:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<94:30, 27.5 steps/min]2025-08-11 18:18:49,330 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:18:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:49,999 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:18:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:51,507 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+='})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:06, 2.27s/it]27.5 steps/min]\u001b[92m18:18:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:52,190 - agent.ComputerAgent - INFO - Computer: click({'x': 313, 'y': 508})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.86s/it]\u001b[92m18:18:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:18:53,760 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:18:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:18:54,670 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.74s/it]INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ " 65%|█████████████████████████---------------| 4742/7340 [172:36<94:33, 27.5 steps/min]\u001b[92m18:18:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.49s/it]\n",
+ "2025-08-11 18:18:55,362 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:18:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:18:56,822 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 65%|█████████████████████████---------------| 4743/7340 [172:39<94:32, 27.5 steps/min]\u001b[92m18:18:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:18:59,133 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "\u001b[92m18:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4743/7340 [172:41<94:33, 27.5 steps/min]\u001b[92m18:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:00,523 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 659})\n",
+ "2025-08-11 18:19:01,183 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:19:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4743/7340 [172:42<94:34, 27.5 steps/min]2025-08-11 18:19:01,840 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:19:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:19:02,481 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:19:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4744/7340 [172:45<94:32, 27.5 steps/min]\u001b[92m18:19:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:04,471 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:19:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:05,104 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:19:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4744/7340 [172:47<94:33, 27.5 steps/min]\u001b[92m18:19:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:06,443 - agent.ComputerAgent - INFO - Computer: click({'x': 65, 'y': 81})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4744/7340 [172:48<94:33, 27.5 steps/min]2025-08-11 18:19:07,596 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:19:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4745/7340 [172:49<94:31, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:08,923 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thunderbird'})\n",
+ " 65%|█████████████████████████---------------| 4745/7340 [172:50<94:31, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4746/7340 [172:51<94:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4750/7340 [172:52<94:15, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb6bac67-5bda-4f5d-993a-c52b9313f5d1/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4750/7340 [172:53<94:16, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:12,593 - agent.ComputerAgent - INFO - Computer: click({'x': 717, 'y': 640})\n",
+ "\u001b[92m18:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:13,256 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ " 65%|█████████████████████████---------------| 4750/7340 [172:54<94:17, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:14,390 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:19:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4752/7340 [172:56<94:10, 27.5 steps/min]2025-08-11 18:19:15,056 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 197})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:15,683 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:19:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4752/7340 [172:57<94:11, 27.5 steps/min]2025-08-11 18:19:16,377 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 209})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:17,046 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:19:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4753/7340 [172:58<94:09, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:17,737 - agent.ComputerAgent - INFO - Computer: click({'x': 694, 'y': 362})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:19:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:19:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4754/7340 [173:00<94:06, 27.5 steps/min]2025-08-11 18:19:19,123 - agent.ComputerAgent - INFO - Computer: move({'x': 869, 'y': 202})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:19,790 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:19:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4755/7340 [173:02<94:04, 27.5 steps/min]\u001b[92m18:19:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:21,812 - agent.ComputerAgent - INFO - Computer: click({'x': 668, 'y': 456})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 668, 'y': 456})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.80s/it]2025-08-11 18:19:22,490 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:19:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4756/7340 [173:04<94:01, 27.5 steps/min]2025-08-11 18:19:23,160 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:19:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]27.5 steps/min]\n",
+ " 65%|█████████████████████████---------------| 4757/7340 [173:08<94:00, 27.5 steps/min]\u001b[92m18:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:27,605 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 286})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4757/7340 [173:09<94:01, 27.5 steps/min]2025-08-11 18:19:28,262 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:28,919 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4758/7340 [173:10<93:58, 27.5 steps/min]2025-08-11 18:19:29,617 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 592, 'scroll_x': 0, 'x': 708, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 592, 'scroll_x': 0, 'x': 708, 'y': 131})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4758/7340 [173:12<93:59, 27.5 steps/min]2025-08-11 18:19:31,290 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:19:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:19:31,930 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:19:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4759/7340 [173:13<93:56, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4759/7340 [173:14<93:57, 27.5 steps/min]\u001b[92m18:19:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:33,966 - agent.ComputerAgent - INFO - Computer: click({'x': 324, 'y': 539})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 324, 'y': 539})\n",
+ " 65%|█████████████████████████---------------| 4760/7340 [173:16<93:55, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:36,141 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:19:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4760/7340 [173:17<93:55, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:37,309 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:19:37,310 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:38,669 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4760/7340 [173:20<93:57, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:39,840 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:19:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|█████████████████████████---------------| 4761/7340 [173:21<93:54, 27.5 steps/min]2025-08-11 18:19:40,510 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:19:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:41,544 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:19:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:42,902 - agent.ComputerAgent - INFO - Computer: type({'text': 'systemctl is-active bluetooth\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'systemctl is-active bluetooth\\n'})\n",
+ " 65%|█████████████████████████---------------| 4761/7340 [173:24<93:56, 27.5 steps/min]2025-08-11 18:19:43,573 - agent.ComputerAgent - INFO - Computer: double_click({'x': 540, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 540, 'y': 128})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:45,585 - agent.ComputerAgent - INFO - Computer: type({'text': 'Bing'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Bing'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:47,554 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 18:19:48,183 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 75\n",
+ " - prompt_tokens: 8048\n",
+ " - total_tokens: 8123\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 64\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0108\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 75\n",
+ " - prompt_tokens: 8048\n",
+ " - total_tokens: 8123\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 64\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0108\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4763/7340 [173:29<93:52, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:19:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:50,235 - agent.ComputerAgent - INFO - Computer: click({'x': 503, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 503, 'y': 429})\n",
+ " 65%|█████████████████████████---------------| 4765/7340 [173:31<93:46, 27.5 steps/min]\u001b[92m18:19:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:19:50,937 - agent.ComputerAgent - INFO - Computer: click({'x': 463, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 463, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:52,236 - agent.ComputerAgent - INFO - Computer: type({'text': '**/__pycache__'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '**/__pycache__'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|█████████████████████████---------------| 4766/7340 [173:34<93:44, 27.5 steps/min]2025-08-11 18:19:53,573 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:19:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:19:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:54,258 - agent.ComputerAgent - INFO - Computer: click({'x': 65, 'y': 71})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 65, 'y': 71})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4769/7340 [173:36<93:35, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4769/7340 [173:37<93:36, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:19:56,922 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:19:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:19:57,570 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:19:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|█████████████████████████---------------| 4769/7340 [173:39<93:37, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:19:58,261 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:19:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:19:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:19:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:20:00,316 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 649})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 649})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 65%|██████████████████████████--------------| 4779/7340 [173:42<93:05, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:20:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:20:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:20:02,302 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4780/7340 [173:44<93:02, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:20:02,981 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:20:03,657 - agent.ComputerAgent - INFO - Computer: click({'x': 107, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 107, 'y': 213})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:20:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4780/7340 [173:46<93:03, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:20:04,944 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:20:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:20:05,630 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:20:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fcabb748-f7c9-4c69-8a02-2f9ce5d34b95/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4781/7340 [173:47<93:01, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:20:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:20:07,412 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 624, 'scroll_x': 0, 'x': 466, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 624, 'scroll_x': 0, 'x': 466, 'y': 133})\n",
+ " 65%|██████████████████████████--------------| 4781/7340 [173:49<93:02, 27.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 65%|██████████████████████████--------------| 4782/7340 [173:51<92:59, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:20:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:20:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4782/7340 [173:52<93:00, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:20:11,901 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:20:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:20:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4782/7340 [173:53<93:01, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:06, 2.07s/it]2025-08-11 18:20:12,636 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 386})\n",
+ "2025-08-11 18:20:13,266 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:20:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4782/7340 [173:55<93:01, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.82s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:20:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4783/7340 [173:56<92:59, 27.5 steps/min]2025-08-11 18:20:15,132 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:20:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+    "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00,  1.47s/it]\n",
+    "[output truncated: repetitive interleaved httpx / LiteLLM / agent.ComputerAgent log lines omitted]\n",
+    "Benchmark progress: 65% (4783-4803/7340 steps, ~27.4 steps/min) across HUD-orchestrated environments.\n",
+    "Composed agent: gpt-5 (provider = openai) planning with HelloKKMe/GTA1-7B (provider = huggingface-local) grounding, issuing click, double_click, keypress, move, and wait actions.\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4803/7340 [175:19<92:36, 27.4 steps/min]\u001b[92m18:21:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:21:38,198 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:21:38,199 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 645})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 645})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:21:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4803/7340 [175:20<92:37, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:21:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:21:40,149 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:21:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4804/7340 [175:21<92:34, 27.4 steps/min]2025-08-11 18:21:41,194 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:21:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4804/7340 [175:22<92:35, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:21:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4804/7340 [175:24<92:35, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:21:44,116 - agent.ComputerAgent - INFO - Agent: I opened VS Code, went to Settings, and under Files: Exclude added the pattern:\n",
+ "**/__pycache__\n",
+ "This hides all __pycache__ folders from the Explorer view at the user level.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I opened VS Code, went to Settings, and under Files: Exclude added the pattern:\n",
+ "**/__pycache__\n",
+ "This hides all __pycache__ folders from the Explorer view at the user level.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:21:44,743 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 756\n",
+ " - prompt_tokens: 11464\n",
+ " - total_tokens: 12220\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 10752\n",
+ " - response_cost: $0.0098\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 756\n",
+ " - prompt_tokens: 11464\n",
+ " - total_tokens: 12220\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 10752\n",
+ " - response_cost: $0.0098\n",
+ "\u001b[92m18:21:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 65%|██████████████████████████--------------| 4805/7340 [175:26<92:33, 27.4 steps/min]2025-08-11 18:21:45,399 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 523, 'x': 630, 'y': 137})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_x': 0, 'scroll_y': 523, 'x': 630, 'y': 137})\n",
+ "2025-08-11 18:21:46,022 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:21:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4805/7340 [175:27<92:34, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 65%|██████████████████████████--------------| 4806/7340 [175:28<92:31, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:21:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:21:48,235 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 577})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 577})\n",
+ " 65%|██████████████████████████--------------| 4806/7340 [175:29<92:31, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4812/7340 [175:30<92:12, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/afdc88be-f209-412c-8905-25f3e8cbf43a/close \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4812/7340 [175:31<92:13, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:21:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 66%|██████████████████████████--------------| 4812/7340 [175:33<92:13, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:21:52,151 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:21:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4812/7340 [175:34<92:14, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4812/7340 [175:38<92:16, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.47s/it]27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f0aa6a3e-e61f-49b1-ade9-e8150e333596/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:21:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:21:59,739 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "2025-08-11 18:22:00,381 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 36})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 36})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:22:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 66%|██████████████████████████--------------| 4814/7340 [175:43<92:12, 27.4 steps/min]\u001b[92m18:22:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:22:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:22:02,323 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -524, 'x': 410, 'y': 512})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -524, 'x': 410, 'y': 512})\n",
+ " 66%|██████████████████████████--------------| 4817/7340 [175:44<92:02, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4817/7340 [175:45<92:03, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:03,961 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\u001b[92m18:22:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4817/7340 [175:46<92:03, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.93s/it]27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:06,671 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m18:22:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4818/7340 [175:48<92:01, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.79s/it]2025-08-11 18:22:07,817 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:22:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.55s/it]27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:22:09,241 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:22:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4819/7340 [175:51<91:59, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 66%|██████████████████████████--------------| 4819/7340 [175:52<92:00, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:11,101 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:22:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:22:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:22:11,787 - agent.ComputerAgent - INFO - Computer: click({'x': 464, 'y': 461})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 464, 'y': 461})\n",
+ " 66%|██████████████████████████--------------| 4819/7340 [175:53<92:00, 27.4 steps/min]\u001b[92m18:22:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:22:12,448 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 66%|██████████████████████████--------------| 4821/7340 [175:54<91:54, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:14,248 - agent.ComputerAgent - INFO - Agent: Chrome’s Safe Browsing is now set to Enhanced protection. You’ll get real-time warnings about potentially harmful or deceptive websites, dangerous downloads, and malicious extensions.\n",
+ "\n",
+ "If you ever want to change this: Chrome menu > Settings > Privacy and security > Security > Safe Browsing.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Chrome’s Safe Browsing is now set to Enhanced protection. You’ll get real-time warnings about potentially harmful or deceptive websites, dangerous downloads, and malicious extensions.\n",
+ "\n",
+ "If you ever want to change this: Chrome menu > Settings > Privacy and security > Security > Safe Browsing.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:22:14,920 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 514\n",
+ " - prompt_tokens: 6726\n",
+ " - total_tokens: 7240\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0135\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 514\n",
+ " - prompt_tokens: 6726\n",
+ " - total_tokens: 7240\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0135\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:22:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4823/7340 [175:57<91:49, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:22:16,260 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:22:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4823/7340 [175:58<91:50, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4824/7340 [175:59<91:47, 27.4 steps/min]2025-08-11 18:22:17,920 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:22:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:19,103 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ " 66%|██████████████████████████--------------| 4824/7340 [176:00<91:48, 27.4 steps/min]\u001b[92m18:22:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:22:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:22:19,764 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 120, 'y': 53})\n",
+ " 66%|██████████████████████████--------------| 4824/7340 [176:01<91:48, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 66%|██████████████████████████--------------| 4838/7340 [176:02<91:02, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d9bc2461-8bd1-4c45-bebd-f473293c581c/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:22:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:22,630 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m18:22:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:22:23,272 - agent.ComputerAgent - INFO - Computer: click({'x': 833, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 833, 'y': 385})\n",
+ " 66%|██████████████████████████--------------| 4838/7340 [176:05<91:03, 27.5 steps/min]2025-08-11 18:22:23,933 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:22:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:25,259 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 66%|██████████████████████████--------------| 4840/7340 [176:07<90:58, 27.5 steps/min]\u001b[92m18:22:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:22:25,941 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 73})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 73})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:26,601 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:22:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4840/7340 [176:08<90:58, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:22:27,300 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:22:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:22:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:22:27,963 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 312})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 901, 'y': 312})\n",
+ " 66%|██████████████████████████--------------| 4841/7340 [176:09<90:56, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:22:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:22:29,233 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m18:22:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+        "Loading checkpoint shards:   0%|          | 0/4 [00:00, ?it/s]2025-08-11 18:22:29,911 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+        "LiteLLM completion() model= gpt-5; provider = openai\n",
+        " 66%|██████████████████████████--------------| 4842/7340 [176:12<90:54, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+        " 66%|██████████████████████████--------------| 4843/7340 [176:14<90:51, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+        "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/invoke \"HTTP/1.1 200 OK\"\n",
+        "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00,  1.57s/it]\n",
+        "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+        "2025-08-11 18:22:38,664 - agent.ComputerAgent - INFO - Computer: click({'x': 271, 'y': 380})\n",
+        " 66%|██████████████████████████--------------| 4846/7340 [176:21<90:45, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f753a3d9-cbdc-4abb-b967-c004e766272f/close \"HTTP/1.1 200 OK\"\n",
+        "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+        "2025-08-11 18:22:51,550 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+        "2025-08-11 18:22:55,725 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': -678, 'x': 318, 'y': 327})\n",
+        "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+        "2025-08-11 18:23:11,988 - agent.ComputerAgent - INFO - Computer: type({'text': 'Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL'})\n",
+        "2025-08-11 18:23:07,365 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 252, 'y': 324})\n",
+        " 66%|██████████████████████████--------------| 4871/7340 [177:08<89:47, 27.5 steps/min]2025-08-11 18:23:27,043 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+        "[... repetitive LiteLLM/httpx/ComputerAgent log lines truncated ...]\n",
+ "2025-08-11 18:23:27,683 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:23:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4871/7340 [177:09<89:47, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:23:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:23:28,378 - agent.ComputerAgent - INFO - Computer: click({'x': 514, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 514, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/83c40b56-f0bf-4b3a-97a5-8a1ae567e0a1/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:23:30,360 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 66%|██████████████████████████--------------| 4871/7340 [177:12<89:49, 27.5 steps/min]\u001b[92m18:23:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:23:31,030 - agent.ComputerAgent - INFO - Computer: click({'x': 385, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 385, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:23:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 66%|██████████████████████████--------------| 4873/7340 [177:13<89:43, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:23:32,298 - agent.ComputerAgent - INFO - Computer: click({'x': 209, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 209, 'y': 524})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26d5566b-d949-4b71-accb-45197078f693/close \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4875/7340 [177:16<89:37, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4875/7340 [177:17<89:38, 27.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]27.5 steps/min]\n",
+ " 66%|██████████████████████████--------------| 4875/7340 [177:19<89:39, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:23:38,253 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:23:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:23:38,933 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:23:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4875/7340 [177:20<89:40, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:23:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:23:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:23:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 66%|██████████████████████████--------------| 4875/7340 [177:22<89:41, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:23:40,982 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 77})\n",
+ "\u001b[92m18:23:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:23:41,668 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': -673, 'x': 318, 'y': 116})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_x': 0, 'scroll_y': -673, 'x': 318, 'y': 116})\n",
+ "2025-08-11 18:23:42,273 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:23:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4875/7340 [177:24<89:42, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.74s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:23:44,102 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+m'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.79s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.74s/it]2025-08-11 18:23:46,329 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+L'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'CTRL+L'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.49s/it]27.5 steps/min]\n",
+ "2025-08-11 18:23:47,033 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:23:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:23:47,792 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:23:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 66%|██████████████████████████--------------| 4877/7340 [177:30<89:38, 27.5 steps/min]\u001b[92m18:23:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:23:49,511 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 315})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 278, 'y': 315})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:23:50,835 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:23:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4877/7340 [177:32<89:39, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:23:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:23:52,016 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 477})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 477})\n",
+ " 66%|██████████████████████████--------------| 4879/7340 [177:35<89:34, 27.5 steps/min]\u001b[92m18:23:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:23:54,695 - agent.ComputerAgent - INFO - Computer: click({'x': 677, 'y': 199})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 677, 'y': 199})\n",
+ " 66%|██████████████████████████--------------| 4879/7340 [177:36<89:35, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/01670e8a-9251-451a-92ad-d842f073c97a/reset \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4880/7340 [177:37<89:32, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:23:57,384 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:23:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:23:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4880/7340 [177:39<89:33, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:23:58,083 - agent.ComputerAgent - INFO - Computer: double_click({'x': 323, 'y': 88})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 323, 'y': 88})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 66%|██████████████████████████--------------| 4880/7340 [177:40<89:33, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:23:59,735 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:23:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:23:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 66%|██████████████████████████--------------| 4881/7340 [177:41<89:31, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:00,412 - agent.ComputerAgent - INFO - Computer: click({'x': 308, 'y': 425})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 308, 'y': 425})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:24:01,059 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:24:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 66%|██████████████████████████--------------| 4881/7340 [177:42<89:31, 27.5 steps/min]2025-08-11 18:24:01,755 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:24:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f096381e-eb5b-49dc-8943-c821405cce10/close \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4882/7340 [177:44<89:29, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:24:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:03,751 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 659, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4882/7340 [177:46<89:30, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:24:05,091 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:24:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:24:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4883/7340 [177:47<89:27, 27.5 steps/min]\u001b[92m18:24:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:06, 2.11s/it]2025-08-11 18:24:07,032 - agent.ComputerAgent - INFO - Computer: click({'x': 415, 'y': 268})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 415, 'y': 268})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4883/7340 [177:48<89:28, 27.5 steps/min]2025-08-11 18:24:07,678 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:24:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.79s/it]27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4884/7340 [177:50<89:25, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.68s/it]\u001b[92m18:24:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:10,325 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:24:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4884/7340 [177:52<89:26, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.24s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.45s/it]\n",
+ "\u001b[92m18:24:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4884/7340 [177:53<89:27, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 67%|██████████████████████████--------------| 4884/7340 [177:54<89:27, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:13,251 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 110})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 542, 'y': 110})\n",
+ "\u001b[92m18:24:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:14,496 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "2025-08-11 18:24:15,163 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:24:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:24:15,851 - agent.ComputerAgent - INFO - Computer: click({'x': 676, 'y': 199})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 676, 'y': 199})\n",
+ " 67%|██████████████████████████--------------| 4884/7340 [177:57<89:29, 27.4 steps/min]\u001b[92m18:24:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:24:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:16,507 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:24:16,508 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 629})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 629})\n",
+ "2025-08-11 18:24:17,190 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 362})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 362})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4887/7340 [177:59<89:20, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:24:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:19,030 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 940, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b73a5c2e-abf5-497b-9501-96d518c8b954/close \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4890/7340 [178:01<89:11, 27.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4890/7340 [178:03<89:12, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:24:23,516 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:24:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4890/7340 [178:05<89:13, 27.5 steps/min]2025-08-11 18:24:24,145 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:24:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:24:24,863 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:24:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4890/7340 [178:06<89:14, 27.5 steps/min]2025-08-11 18:24:25,866 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:24:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]\u001b[92m18:24:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4890/7340 [178:08<89:15, 27.5 steps/min]2025-08-11 18:24:27,204 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:24:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4890/7340 [178:09<89:15, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.68s/it]\u001b[92m18:24:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]27.4 steps/min]\n",
+ " 67%|██████████████████████████--------------| 4895/7340 [178:12<89:00, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ad87d89c-437d-4ed4-b0f0-a157e7d11bbd/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 67%|██████████████████████████--------------| 4895/7340 [178:13<89:01, 27.5 steps/min]\u001b[92m18:24:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:32,390 - agent.ComputerAgent - INFO - Computer: click({'x': 118, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 118, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:33,061 - agent.ComputerAgent - INFO - Computer: click({'x': 309, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 309, 'y': 426})\n",
+ " 67%|██████████████████████████--------------| 4895/7340 [178:14<89:01, 27.5 steps/min]\u001b[92m18:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:33,715 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 686, 'y': 40})\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:17<88:53, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:18<88:54, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:24:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:19<88:54, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]2025-08-11 18:24:39,495 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:24:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:21<88:55, 27.5 steps/min]2025-08-11 18:24:40,417 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:24:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:22<88:55, 27.5 steps/min]2025-08-11 18:24:41,113 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:24:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.63s/it]27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "\u001b[92m18:24:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:24<88:56, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:25<88:57, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:24:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:45,195 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 590, 'scroll_x': 0, 'x': 73, 'y': 245})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 590, 'scroll_x': 0, 'x': 73, 'y': 245})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:24:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4898/7340 [178:27<88:58, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:46,438 - agent.ComputerAgent - INFO - Computer: click({'x': 529, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 529, 'y': 101})\n",
+ "\u001b[92m18:24:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:47,102 - agent.ComputerAgent - INFO - Computer: double_click({'x': 381, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 381, 'y': 278})\n",
+ " 67%|██████████████████████████--------------| 4899/7340 [178:28<88:55, 27.4 steps/min]\u001b[92m18:24:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:47,746 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 112})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 633, 'y': 112})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4901/7340 [178:30<88:49, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:49,583 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4902/7340 [178:32<88:47, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:51,585 - agent.ComputerAgent - INFO - Computer: type({'text': 'Bing'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Bing'})\n",
+ " 67%|██████████████████████████--------------| 4903/7340 [178:33<88:45, 27.5 steps/min]\u001b[92m18:24:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:52,251 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 503})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 503})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:24:53,264 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4904/7340 [178:35<88:42, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:54,624 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:24:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:55,314 - agent.ComputerAgent - INFO - Computer: click({'x': 308, 'y': 425})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 308, 'y': 425})\n",
+ " 67%|██████████████████████████--------------| 4905/7340 [178:37<88:40, 27.5 steps/min]2025-08-11 18:24:55,947 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:24:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:24:56,644 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:24:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:24:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4906/7340 [178:39<88:38, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:24:57,933 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:24:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:24:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:24:58,623 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 188})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 188})\n",
+ " 67%|██████████████████████████--------------| 4906/7340 [178:40<88:38, 27.5 steps/min]2025-08-11 18:24:59,285 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:24:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4907/7340 [178:41<88:36, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:25:00,653 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:25:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:25:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:01,312 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 130})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7f365dff-cd43-450e-aa25-70afb55acec3/reset \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4908/7340 [178:45<88:34, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:03,951 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:25:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:25:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4908/7340 [178:46<88:35, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:25:05,248 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:05,942 - agent.ComputerAgent - INFO - Computer: double_click({'x': 453, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 453, 'y': 278})\n",
+ " 67%|██████████████████████████--------------| 4908/7340 [178:47<88:35, 27.5 steps/min]2025-08-11 18:25:06,596 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:25:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4909/7340 [178:48<88:32, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:08,270 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4909/7340 [178:50<88:33, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4909/7340 [178:52<88:34, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:12,279 - agent.ComputerAgent - INFO - Computer: type({'text': 'Etsy'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Etsy'})\n",
+ "\u001b[92m18:25:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4909/7340 [178:54<88:35, 27.4 steps/min]2025-08-11 18:25:12,949 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:25:12,950 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 761})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 761})\n",
+ "2025-08-11 18:25:13,605 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:25:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4910/7340 [178:55<88:33, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 67%|██████████████████████████--------------| 4911/7340 [178:56<88:30, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:16,085 - agent.ComputerAgent - INFO - Computer: type({'text': 'python3 main.py\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'python3 main.py\\n'})\n",
+ " 67%|██████████████████████████--------------| 4911/7340 [178:57<88:30, 27.4 steps/min]\u001b[92m18:25:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:16,752 - agent.ComputerAgent - INFO - Computer: click({'x': 209, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 209, 'y': 524})\n",
+ " 67%|██████████████████████████--------------| 4913/7340 [178:59<88:25, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:25:19,923 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:25:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4913/7340 [179:01<88:26, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:20,575 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4913/7340 [179:02<88:26, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:25:21,906 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:25:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:23,223 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+A'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'CTRL+A'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:23,882 - agent.ComputerAgent - INFO - Computer: click({'x': 117, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4914/7340 [179:05<88:25, 27.4 steps/min]2025-08-11 18:25:24,535 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:25:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:25:25,166 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:25:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4915/7340 [179:06<88:22, 27.4 steps/min]2025-08-11 18:25:25,835 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:25:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4915/7340 [179:07<88:22, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:27,508 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:25:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4915/7340 [179:09<88:23, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 67%|██████████████████████████--------------| 4915/7340 [179:10<88:24, 27.4 steps/min]\u001b[92m18:25:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:29,355 - agent.ComputerAgent - INFO - Computer: click({'x': 457, 'y': 461})\n",
+ " 67%|██████████████████████████--------------| 4915/7340 [179:11<88:24, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:30,499 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:25:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4916/7340 [179:12<88:22, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:25:32,470 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 93})\n",
+ " 67%|██████████████████████████--------------| 4916/7340 [179:14<88:22, 27.4 steps/min]\u001b[92m18:25:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:33,151 - agent.ComputerAgent - INFO - Computer: click({'x': 713, 'y': 40})\n",
+ " 67%|██████████████████████████--------------| 4917/7340 [179:15<88:19, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 67%|██████████████████████████--------------| 4918/7340 [179:16<88:17, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:35,419 - agent.ComputerAgent - INFO - Computer: double_click({'x': 512, 'y': 272})\n",
+ " 67%|██████████████████████████--------------| 4918/7340 [179:17<88:17, 27.4 steps/min]2025-08-11 18:25:36,030 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:25:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4919/7340 [179:18<88:15, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:25:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:37,868 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 36})\n",
+ " 67%|██████████████████████████--------------| 4920/7340 [179:20<88:12, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4920/7340 [179:21<88:13, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:41,301 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "\u001b[92m18:25:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4920/7340 [179:23<88:14, 27.4 steps/min]2025-08-11 18:25:41,957 - agent.ComputerAgent - INFO - Computer: double_click({'x': 158, 'y': 130})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:42,626 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:25:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:43,989 - agent.ComputerAgent - INFO - Agent: I’ve opened Chrome’s Google Password Manager (chrome://password-manager/passwords) and searched for “Etsy.” No saved entries are currently listed, so there isn’t an Etsy login saved in this browser profile.\n",
+ "\n",
+ "If you expected to see it:\n",
+ "- Make sure you’re in the correct Chrome profile and signed in with sync for Passwords enabled.\n",
+ "- If you use another browser or profile, check its password manager instead.\n",
+ "\n",
+ "If an Etsy entry appears later, you can click it to view the username and site while the password stays hidden (only revealed if you click the eye icon).\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 18:25:44,627 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1025\n",
+ " - prompt_tokens: 7396\n",
+ " - total_tokens: 8421\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 896\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0195\n",
+ " 67%|██████████████████████████--------------| 4922/7340 [179:26<88:09, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:46,567 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:25:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4923/7340 [179:28<88:06, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:25:47,604 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:25:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4923/7340 [179:29<88:07, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:25:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:25:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:25:48,294 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 203})\n",
+ "2025-08-11 18:25:48,924 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 527})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4925/7340 [179:32<88:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55f73a3-1816-4f61-8ec1-88f743cec333/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4926/7340 [179:33<87:59, 27.4 steps/min]2025-08-11 18:25:52,727 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:25:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:53,395 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:25:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4926/7340 [179:35<88:00, 27.4 steps/min]2025-08-11 18:25:54,064 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:25:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:54,737 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:25:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4926/7340 [179:36<88:01, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4926/7340 [179:37<88:01, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:25:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4926/7340 [179:38<88:02, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4926/7340 [179:39<88:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:25:59,163 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:26:00,333 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:26:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4938/7340 [179:43<87:25, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/58eb5f3b-e072-4e49-b55c-b7c5400ab5dd/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:26:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4938/7340 [179:47<87:27, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4938/7340 [179:50<87:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.20s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:26:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4938/7340 [179:54<87:30, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:26:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:26:13,281 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 10})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 67%|██████████████████████████--------------| 4938/7340 [179:55<87:31, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [179:56<87:28, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [179:57<87:29, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [179:58<87:29, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [179:59<87:30, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:19,461 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:26:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:01<87:30, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:04<87:32, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:09<87:34, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:11<87:35, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/reset \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:15<87:37, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:34,972 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:26:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:16<87:38, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:19<87:39, 27.4 steps/min]\u001b[92m18:26:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:26:38,709 - agent.ComputerAgent - INFO - Computer: click({'x': 307, 'y': 426})\n",
+ " 67%|██████████████████████████--------------| 4939/7340 [180:20<87:40, 27.4 steps/min]\u001b[92m18:26:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:39,888 - agent.ComputerAgent - INFO - Computer: click({'x': 686, 'y': 41})\n",
+ " 67%|██████████████████████████--------------| 4941/7340 [180:22<87:34, 27.4 steps/min]\u001b[92m18:26:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:41,581 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ " 67%|██████████████████████████--------------| 4941/7340 [180:23<87:35, 27.4 steps/min]\u001b[92m18:26:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:43,284 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 219})\n",
+ " 67%|██████████████████████████--------------| 4943/7340 [180:26<87:29, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:44,970 - agent.ComputerAgent - INFO - Computer: click({'x': 548, 'y': 250})\n",
+ "2025-08-11 18:26:45,604 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:26:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4943/7340 [180:27<87:30, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:26:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:26:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:26:46,969 - agent.ComputerAgent - INFO - Computer: click({'x': 668, 'y': 456})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 668, 'y': 456})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4944/7340 [180:28<87:27, 27.4 steps/min]2025-08-11 18:26:47,677 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:26:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:26:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:48,343 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:26:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:26:49,054 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 53})\n",
+ " 67%|██████████████████████████--------------| 4945/7340 [180:30<87:25, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:49,754 - agent.ComputerAgent - INFO - Computer: click({'x': 235, 'y': 150})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 235, 'y': 150})\n",
+ " 67%|██████████████████████████--------------| 4946/7340 [180:31<87:22, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:50,908 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:26:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4947/7340 [180:32<87:20, 27.4 steps/min]\u001b[92m18:26:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:26:51,596 - agent.ComputerAgent - INFO - Computer: click({'x': 274, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 274, 'y': 53})\n",
+ " 67%|██████████████████████████--------------| 4947/7340 [180:33<87:20, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:26:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:26:53,258 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:26:53,259 - agent.ComputerAgent - INFO - Computer: click({'x': 679, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 679, 'y': 203})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f2556c8e-dab9-4e3b-a05f-de09c175b204/close \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4950/7340 [180:34<87:11, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/03aa35c2-85a2-415c-9f5c-8881215ff6ba/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:54,606 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:26:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:36<87:08, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:55,753 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:26:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:37<87:09, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:56,914 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:26:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:38<87:09, 27.4 steps/min]2025-08-11 18:26:57,583 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:26:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:39<87:10, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03aa35c2-85a2-415c-9f5c-8881215ff6ba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:26:59,262 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:26:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:41<87:11, 27.4 steps/min]2025-08-11 18:26:59,932 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:26:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:42<87:11, 27.4 steps/min]2025-08-11 18:27:00,622 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:27:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:43<87:12, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:02,491 - agent.ComputerAgent - INFO - Computer: type({'text': 'Anonym\\nXYZ Lab'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Anonym\\nXYZ Lab'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4951/7340 [180:44<87:12, 27.4 steps/min]\u001b[92m18:27:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4952/7340 [180:46<87:10, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 18:27:07,040 - agent.ComputerAgent - INFO - Computer: type({'text': '.odp'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '.odp'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]2025-08-11 18:27:08,349 - agent.ComputerAgent - INFO - Computer: type({'text': 'Dota 2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Dota 2'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 67%|██████████████████████████--------------| 4952/7340 [180:50<87:12, 27.4 steps/min]\u001b[92m18:27:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:27:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 67%|██████████████████████████--------------| 4954/7340 [180:52<87:06, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:11,202 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:11,909 - agent.ComputerAgent - INFO - Computer: move({'x': 207, 'y': 503})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 207, 'y': 503})\n",
+ "\u001b[92m18:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/reset \"HTTP/1.1 200 OK\"\n",
+ " 67%|██████████████████████████--------------| 4954/7340 [180:53<87:07, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:12,568 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:27:12,569 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 651})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 651})\n",
+ "2025-08-11 18:27:13,227 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 275, 'y': 53})\n",
+ "\u001b[92m18:27:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4955/7340 [180:54<87:04, 27.4 steps/min]2025-08-11 18:27:13,914 - agent.ComputerAgent - INFO - Computer: click({'x': 526, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 526, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [180:57<86:56, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [180:58<86:56, 27.4 steps/min]2025-08-11 18:27:16,619 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:27:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0d923fcd-4666-4869-8ad2-17460c904167/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [180:59<86:57, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:18,343 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [181:00<86:57, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:19,054 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:27:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:19,752 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:27:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:20,423 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:21,103 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:27:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [181:02<86:58, 27.4 steps/min]2025-08-11 18:27:22,138 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:27:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:22,850 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:27:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:23,489 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:27:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [181:05<87:00, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [181:06<87:00, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:25,909 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 207})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 207})\n",
+ " 68%|███████████████████████████-------------| 4958/7340 [181:07<87:01, 27.4 steps/min]\u001b[92m18:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:26,583 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:27:26,585 - agent.ComputerAgent - INFO - Computer: click({'x': 215, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 215, 'y': 53})\n",
+ " 68%|███████████████████████████-------------| 4960/7340 [181:09<86:55, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4960/7340 [181:11<86:56, 27.4 steps/min]\u001b[92m18:27:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:31,281 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:27:31,282 - agent.ComputerAgent - INFO - Agent: Use the time value as hours by converting it from “fraction of a day.”\n",
+ "\n",
+ "Explanation: In spreadsheets, a time like 02:00 is stored as 2/24 (0.0833). To multiply by an hourly rate, convert the time to decimal hours by multiplying by 24.\n",
+ "\n",
+ "Formula:\n",
+ "- If Total Hours is in A2 (formatted as time) and Hourly Rate is in B2 (a number), use:\n",
+ "= B2 * (A2 * 24)\n",
+ "\n",
+ "Tips:\n",
+ "- Format the result cell as Currency.\n",
+ "- If your total-hours cell can exceed 24 hours, format that cell as [h]:mm so it displays correctly, but still multiply by 24 in the formula.\n",
+ "- If your hours are already in decimal form (e.g., 37.5), don’t multiply by 24; just use =B2*A2.\n",
+ "\n",
+ "Example:\n",
+ "- A2 = 08:30 (8 hours 30 minutes), B2 = 20\n",
+ "- Pay = 20 * (8:30 * 24) = 20 * 8.5 = 170\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 18:27:31,944 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1391\n",
+ " - prompt_tokens: 2002\n",
+ " - total_tokens: 3393\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1152\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0164\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:27:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4961/7340 [181:15<86:55, 27.4 steps/min]\u001b[92m18:27:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:34,624 - agent.ComputerAgent - INFO - Computer: click({'x': 309, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 309, 'y': 426})\n",
+ "\u001b[92m18:27:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:35,934 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:27:35,935 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:36,626 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 485})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 306, 'y': 485})\n",
+ "\u001b[92m18:27:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4961/7340 [181:18<86:56, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:27:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:27:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:37,277 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 151})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 205, 'y': 151})\n",
+ "2025-08-11 18:27:37,908 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:27:37,908 - agent.ComputerAgent - INFO - Computer: double_click({'button': 'left', 'x': 987, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'button': 'left', 'x': 987, 'y': 148})\n",
+ "2025-08-11 18:27:38,546 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:27:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:39,176 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:27:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:39,865 - agent.ComputerAgent - INFO - Computer: click({'x': 670, 'y': 226})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 670, 'y': 226})\n",
+ " 68%|███████████████████████████-------------| 4965/7340 [181:21<86:45, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 68%|███████████████████████████-------------| 4967/7340 [181:22<86:39, 27.4 steps/min]\u001b[92m18:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:41,677 - agent.ComputerAgent - INFO - Computer: double_click({'x': 229, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 229, 'y': 131})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:27:42,333 - agent.ComputerAgent - INFO - LLM processing started with 7 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 7 messages\n",
+ "\u001b[92m18:27:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03aa35c2-85a2-415c-9f5c-8881215ff6ba/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4967/7340 [181:24<86:39, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4968/7340 [181:25<86:37, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4968/7340 [181:26<86:37, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:45,562 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03aa35c2-85a2-415c-9f5c-8881215ff6ba/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4969/7340 [181:27<86:35, 27.4 steps/min]\u001b[92m18:27:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:27:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:47,565 - agent.ComputerAgent - INFO - LLM processing started with 9 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 9 messages\n",
+ "\u001b[92m18:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4988/7340 [181:29<85:34, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:48,252 - agent.ComputerAgent - INFO - Computer: click({'x': 408, 'y': 531})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 408, 'y': 531})\n",
+ "\u001b[92m18:27:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:49,593 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:27:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:50,245 - agent.ComputerAgent - INFO - Computer: move({'x': 207, 'y': 525})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 207, 'y': 525})\n",
+ " 68%|███████████████████████████-------------| 4988/7340 [181:31<85:35, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:27:50,923 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:27:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:27:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:51,615 - agent.ComputerAgent - INFO - Computer: click({'x': 258, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 258, 'y': 219})\n",
+ "2025-08-11 18:27:52,253 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4990/7340 [181:34<85:30, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:27:52,925 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/03aa35c2-85a2-415c-9f5c-8881215ff6ba/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:54,255 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:27:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4991/7340 [181:36<85:28, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4991/7340 [181:38<85:29, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:27:57,473 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4992/7340 [181:39<85:26, 27.5 steps/min]2025-08-11 18:27:58,163 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:27:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:27:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:27:59,463 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:00,137 - agent.ComputerAgent - INFO - LLM processing started with 11 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 11 messages\n",
+ " 68%|███████████████████████████-------------| 4992/7340 [181:41<85:27, 27.5 steps/min]\u001b[92m18:28:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/reset \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 4992/7340 [181:43<85:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:04,775 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:28:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 4993/7340 [181:46<85:26, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 18:28:05,464 - agent.ComputerAgent - INFO - LLM processing started with 13 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 13 messages\n",
+ "\u001b[92m18:28:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ " 68%|███████████████████████████-------------| 4993/7340 [181:48<85:27, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:28:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4994/7340 [181:49<85:24, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:28:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:08,587 - agent.ComputerAgent - INFO - Computer: click({'x': 428, 'y': 538})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 428, 'y': 538})\n",
+ "\u001b[92m18:28:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:09,237 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m18:28:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:09,893 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 623, 'x': 399, 'y': 425})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_x': 0, 'scroll_y': 623, 'x': 399, 'y': 425})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4994/7340 [181:52<85:26, 27.5 steps/min]\u001b[92m18:28:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:11,252 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 207})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 471, 'y': 207})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 68%|███████████████████████████-------------| 4996/7340 [181:53<85:20, 27.5 steps/min]\u001b[92m18:28:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:12,837 - agent.ComputerAgent - INFO - Computer: click({'x': 457, 'y': 457})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 457, 'y': 457})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:28:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 4998/7340 [181:55<85:14, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:28:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:14,707 - agent.ComputerAgent - INFO - Computer: click({'x': 654, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 654, 'y': 564})\n",
+ " 68%|███████████████████████████-------------| 4999/7340 [181:56<85:12, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:15,356 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m18:28:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5000/7340 [181:57<85:09, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:17,177 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5000/7340 [181:58<85:10, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:17,868 - agent.ComputerAgent - INFO - Computer: click({'x': 532, 'y': 660})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 532, 'y': 660})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5001/7340 [182:00<85:07, 27.5 steps/min]\u001b[92m18:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:19,904 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:20,575 - agent.ComputerAgent - INFO - Computer: move({'x': 402, 'y': 525})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 402, 'y': 525})\n",
+ "2025-08-11 18:28:21,225 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:28:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 5002/7340 [182:03<85:05, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:22,498 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:28:22,499 - agent.ComputerAgent - INFO - Computer: click({'x': 157, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 157, 'y': 62})\n",
+ " 68%|███████████████████████████-------------| 5003/7340 [182:04<85:02, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:23,183 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:23,924 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 334})\n",
+ " 68%|███████████████████████████-------------| 5004/7340 [182:05<85:00, 27.5 steps/min]2025-08-11 18:28:24,574 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:28:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:25,205 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:28:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/reset \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5005/7340 [182:09<84:58, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 68%|███████████████████████████-------------| 5006/7340 [182:10<84:56, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:29,624 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:28:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5006/7340 [182:11<84:56, 27.5 steps/min]2025-08-11 18:28:30,308 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 539})\n",
+ "2025-08-11 18:28:30,974 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "\u001b[92m18:28:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5006/7340 [182:12<84:57, 27.5 steps/min]2025-08-11 18:28:31,654 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:28:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:32,319 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:28:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5007/7340 [182:14<84:54, 27.5 steps/min]2025-08-11 18:28:32,975 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:28:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:33,635 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:28:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 68%|███████████████████████████-------------| 5008/7340 [182:16<84:52, 27.5 steps/min]\u001b[92m18:28:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:28:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:35,689 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 150})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:37,025 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 5008/7340 [182:18<84:53, 27.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:37,694 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:28:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:28:38,345 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:28:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:39,006 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 79, 'y': 133}, {'x': 343, 'y': 158}]})\n",
+ " 68%|███████████████████████████-------------| 5010/7340 [182:21<84:48, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:40,655 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m18:28:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5010/7340 [182:24<84:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:43,949 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:45,308 - agent.ComputerAgent - INFO - Computer: type({'text': 'Anonym\\nXYZ Lab'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5011/7340 [182:27<84:47, 27.5 steps/min]2025-08-11 18:28:45,948 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:28:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:28:46,644 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:28:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5012/7340 [182:29<84:45, 27.5 steps/min]\u001b[92m18:28:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:28:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:28:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:50,651 - agent.ComputerAgent - INFO - Computer: type({'text': 'tomorrow flights status from New York to Columbus Ohio'})\n",
+ " 68%|███████████████████████████-------------| 5012/7340 [182:32<84:47, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:51,300 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 522})\n",
+ "\u001b[92m18:28:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:28:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:51,991 - agent.ComputerAgent - INFO - Computer: double_click({'x': 428, 'y': 539})\n",
+ "2025-08-11 18:28:52,629 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:28:52,630 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 385})\n",
+ "\u001b[92m18:28:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5013/7340 [182:34<84:44, 27.5 steps/min]2025-08-11 18:28:53,285 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 630, 'scroll_x': 0, 'x': 573, 'y': 478})\n",
+ "2025-08-11 18:28:53,948 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m18:28:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5016/7340 [182:35<84:36, 27.5 steps/min]2025-08-11 18:28:55,124 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:28:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5017/7340 [182:36<84:33, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 68%|███████████████████████████-------------| 5018/7340 [182:37<84:30, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:28:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5018/7340 [182:39<84:31, 27.5 steps/min]\n",
+ "2025-08-11 18:28:57,984 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m18:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:28:58,664 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5018/7340 [182:40<84:31, 27.5 steps/min]\n",
+ "\u001b[92m18:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:28:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:28:59,325 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:28:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:28:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:00,018 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 166}, {'x': 169, 'y': 165}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 68%|███████████████████████████-------------| 5019/7340 [182:41<84:29, 27.5 steps/min]2025-08-11 18:29:00,715 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:29:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:01,396 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:29:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:03,442 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5020/7340 [182:45<84:27, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:04,094 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:04,782 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:29:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:05,479 - agent.ComputerAgent - INFO - Computer: click({'x': 592, 'y': 100})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 68%|███████████████████████████-------------| 5020/7340 [182:47<84:28, 27.5 steps/min]2025-08-11 18:29:06,136 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m18:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:06,776 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:29:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5021/7340 [182:48<84:25, 27.5 steps/min]2025-08-11 18:29:07,462 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:29:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5021/7340 [182:49<84:26, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:09,164 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:29:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 68%|███████████████████████████-------------| 5021/7340 [182:50<84:27, 27.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:10,316 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:29:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 5021/7340 [182:53<84:28, 27.5 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:29:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:12,850 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 1008, 'y': 760})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:29:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:14,853 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 68%|███████████████████████████-------------| 5021/7340 [182:56<84:29, 27.4 steps/min]2025-08-11 18:29:15,498 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:29:15,499 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 143})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:16,177 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 5023/7340 [182:58<84:24, 27.5 steps/min]\u001b[92m18:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:17,546 - agent.ComputerAgent - INFO - Computer: click({'x': 548, 'y': 250})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 5024/7340 [182:59<84:21, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:18,922 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 188})\n",
+ "\u001b[92m18:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:19,593 - agent.ComputerAgent - INFO - Computer: click({'x': 435, 'y': 185})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 68%|███████████████████████████-------------| 5025/7340 [183:02<84:19, 27.5 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:20,925 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:21,611 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 52})\n",
+ " 68%|███████████████████████████-------------| 5027/7340 [183:03<84:13, 27.5 steps/min]2025-08-11 18:29:22,296 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:29:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5028/7340 [183:04<84:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:23,995 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5028/7340 [183:06<84:11, 27.5 steps/min]\u001b[92m18:29:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:29:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:25,848 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 655, 'scroll_x': 0, 'x': 534, 'y': 647})\n",
+ " 69%|███████████████████████████-------------| 5028/7340 [183:07<84:12, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:27,015 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5029/7340 [183:10<84:10, 27.5 steps/min]\u001b[92m18:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:29,440 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5029/7340 [183:11<84:10, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:29:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:30,098 - agent.ComputerAgent - INFO - Computer: click({'x': 506, 'y': 723})\n",
+ "2025-08-11 18:29:30,786 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:29:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:31,448 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:29:31,448 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 477})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:32,831 - agent.ComputerAgent - INFO - Computer: type({'text': '140'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:34,196 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5029/7340 [183:16<84:13, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:35,518 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:29:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:36,879 - agent.ComputerAgent - INFO - Computer: click({'x': 419, 'y': 473})\n",
+ " 69%|███████████████████████████-------------| 5033/7340 [183:18<84:01, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:29:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5034/7340 [183:20<83:59, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:39,204 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 658})\n",
+ "\u001b[92m18:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:39,855 - agent.ComputerAgent - INFO - Computer: click({'x': 179, 'y': 221})\n",
+ "\u001b[92m18:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5034/7340 [183:21<83:59, 27.5 steps/min]2025-08-11 18:29:40,537 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 140})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:41,322 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:29:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:41,996 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:29:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:42,658 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:29:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5036/7340 [183:24<83:54, 27.5 steps/min]2025-08-11 18:29:44,349 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:29:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:46,300 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ " 69%|███████████████████████████-------------| 5037/7340 [183:28<83:53, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:48,054 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ " 69%|███████████████████████████-------------| 5037/7340 [183:29<83:53, 27.5 steps/min]\u001b[92m18:29:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5037/7340 [183:31<83:54, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:50,150 - agent.ComputerAgent - INFO - Computer: click({'x': 140, 'y': 430})\n",
+ "\u001b[92m18:29:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:50,819 - agent.ComputerAgent - INFO - Computer: click({'x': 709, 'y': 109})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5037/7340 [183:32<83:55, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:29:51,483 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:29:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:52,144 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:29:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:52,803 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ " 69%|███████████████████████████-------------| 5039/7340 [183:34<83:49, 27.4 steps/min]\u001b[92m18:29:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:29:53,472 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:29:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5039/7340 [183:35<83:50, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cfe4e097-0434-4025-a00a-78e26d753e51/close \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5039/7340 [183:37<83:50, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:29:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:29:56,606 - agent.ComputerAgent - INFO - Computer: click({'x': 488, 'y': 437})\n",
+ "\u001b[92m18:29:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5039/7340 [183:38<83:51, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:57,272 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 188})\n",
+ "2025-08-11 18:29:57,883 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:29:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5040/7340 [183:39<83:48, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:29:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:29:59,182 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:29:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5041/7340 [183:40<83:46, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.61s/it]27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5041/7340 [183:45<83:48, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it]2025-08-11 18:30:04,624 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:30:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5041/7340 [183:46<83:48, 27.4 steps/min]2025-08-11 18:30:05,538 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:30:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "\u001b[92m18:30:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5041/7340 [183:50<83:50, 27.4 steps/min]\u001b[92m18:30:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:30:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:09,218 - agent.ComputerAgent - INFO - Computer: click({'x': 354, 'y': 215})\n",
+ "\u001b[92m18:30:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:09,852 - agent.ComputerAgent - INFO - Computer: click({'x': 464, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 69%|███████████████████████████-------------| 5041/7340 [183:52<83:51, 27.4 steps/min]\u001b[92m18:30:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:30:11,237 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 677, 'scroll_x': 0, 'x': 606, 'y': 401})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:30:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:12,515 - agent.ComputerAgent - INFO - Computer: type({'text': 'arXiv daily foundation models Oct 11 2023'})\n",
+ "2025-08-11 18:30:13,196 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 668, 'scroll_x': 0, 'x': 526, 'y': 463})\n",
+ "2025-08-11 18:30:13,855 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 482})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5043/7340 [183:56<83:46, 27.4 steps/min]\u001b[92m18:30:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:30:15,211 - agent.ComputerAgent - INFO - Computer: click({'x': 482, 'y': 351})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:16,506 - agent.ComputerAgent - INFO - Computer: type({'text': '110'})\n",
+ "\u001b[92m18:30:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:17,806 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 69%|███████████████████████████-------------| 5047/7340 [183:59<83:35, 27.4 steps/min]2025-08-11 18:30:18,472 - agent.ComputerAgent - INFO - Computer: click({'x': 434, 'y': 185})\n",
+ " 69%|███████████████████████████-------------| 5050/7340 [184:03<83:27, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5050/7340 [184:04<83:28, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:23,753 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:30:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5050/7340 [184:05<83:28, 27.4 steps/min]2025-08-11 18:30:24,404 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:30:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:25,033 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:30:25,713 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:30:26,412 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:30:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5050/7340 [184:08<83:29, 27.4 steps/min]2025-08-11 18:30:27,084 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:30:27,762 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:30:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:30:29,756 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 69%|███████████████████████████-------------| 5050/7340 [184:11<83:31, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:30,778 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:30:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:30:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5051/7340 [184:12<83:28, 27.4 steps/min]2025-08-11 18:30:31,505 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 176})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:32,812 - agent.ComputerAgent - INFO - Computer: type({'text': 'Investment Summary'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Investment Summary'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5051/7340 [184:15<83:29, 27.4 steps/min]\u001b[92m18:30:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:34,193 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:30:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:30:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:30:34,890 - agent.ComputerAgent - INFO - Computer: click({'x': 170, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 170, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:36,190 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 69%|███████████████████████████-------------| 5053/7340 [184:17<83:24, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:37,375 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:30:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5057/7340 [184:19<83:12, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2bf9cd89-2d6a-4856-a09d-a771bc278600/close \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5057/7340 [184:20<83:13, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:39,753 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:30:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:41,033 - agent.ComputerAgent - INFO - Computer: type({'text': 'LanguageTool'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'LanguageTool'})\n",
+ " 69%|███████████████████████████-------------| 5057/7340 [184:22<83:14, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:41,704 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:30:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:30:42,383 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:30:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:30:43,032 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5058/7340 [184:24<83:12, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5058/7340 [184:26<83:12, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:30:46,230 - agent.ComputerAgent - INFO - Computer: type({'text': '110%'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '110%'})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<83:13, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5059/7340 [184:29<83:10, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.71s/it]\u001b[92m18:30:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5059/7340 [184:30<83:11, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 18:30:50,124 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:30:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5059/7340 [184:32<83:12, 27.4 steps/min]\u001b[92m18:30:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5059/7340 [184:34<83:13, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:53,843 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:30:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5059/7340 [184:35<83:13, 27.4 steps/min]\u001b[92m18:30:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:54,488 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 613, 'x': 591, 'y': 115})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 613, 'x': 591, 'y': 115})\n",
+ "\u001b[92m18:30:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:55,127 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 483})\n",
+ " 69%|███████████████████████████-------------| 5060/7340 [184:36<83:11, 27.4 steps/min]\u001b[92m18:30:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:55,795 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 151})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 128, 'y': 151})\n",
+ "\u001b[92m18:30:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:56,463 - agent.ComputerAgent - INFO - Computer: click({'x': 514, 'y': 95})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 514, 'y': 95})\n",
+ "\u001b[92m18:30:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5061/7340 [184:38<83:08, 27.4 steps/min]\u001b[92m18:30:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:30:57,794 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m18:30:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:30:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:30:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:30:58,467 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 612, 'scroll_x': 0, 'x': 601, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 612, 'scroll_x': 0, 'x': 601, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:30:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:30:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5063/7340 [184:41<83:03, 27.4 steps/min]\u001b[92m18:30:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:00,862 - agent.ComputerAgent - INFO - Computer: double_click({'x': 194, 'y': 236})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 194, 'y': 236})\n",
+ "2025-08-11 18:31:01,543 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 207}, {'x': 124, 'y': 179}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 80, 'y': 207}, {'x': 124, 'y': 179}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:31:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 69%|███████████████████████████-------------| 5065/7340 [184:43<82:58, 27.4 steps/min]\u001b[92m18:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:02,880 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 645, 'scroll_x': 0, 'x': 651, 'y': 612})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 645, 'scroll_x': 0, 'x': 651, 'y': 612})\n",
+ "\u001b[92m18:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:03,496 - agent.ComputerAgent - INFO - Computer: click({'x': 784, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 784, 'y': 185})\n",
+ "\u001b[92m18:31:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5067/7340 [184:45<82:52, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:04,138 - agent.ComputerAgent - INFO - Computer: click({'x': 637, 'y': 469})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 637, 'y': 469})\n",
+ " 69%|███████████████████████████-------------| 5069/7340 [184:46<82:46, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:05,813 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:31:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5070/7340 [184:47<82:44, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5070/7340 [184:49<82:45, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:09,163 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5071/7340 [184:50<82:42, 27.4 steps/min]2025-08-11 18:31:09,837 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 251})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 251})\n",
+ "2025-08-11 18:31:10,476 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:31:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5071/7340 [184:52<82:43, 27.4 steps/min]2025-08-11 18:31:11,154 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:31:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:31:11,833 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:31:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [184:53<82:40, 27.4 steps/min]2025-08-11 18:31:12,826 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:31:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:31:13,502 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:31:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [184:55<82:41, 27.4 steps/min]2025-08-11 18:31:14,175 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:31:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:15,507 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+m'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+m'})\n",
+ "2025-08-11 18:31:16,155 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:31:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [184:57<82:42, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:31:16,843 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:31:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:31:17,494 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:31:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [184:59<82:43, 27.4 steps/min]2025-08-11 18:31:18,143 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:31:18,800 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [185:02<82:44, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/reset \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [185:04<82:45, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5072/7340 [185:05<82:46, 27.4 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:25,209 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 673})\n",
+ " 69%|███████████████████████████-------------| 5073/7340 [185:06<82:43, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:25,908 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "\u001b[92m18:31:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5074/7340 [185:07<82:40, 27.4 steps/min]2025-08-11 18:31:26,941 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5074/7340 [185:09<82:41, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:28,766 - agent.ComputerAgent - INFO - Computer: click({'x': 80, 'y': 209})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5075/7340 [185:10<82:38, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:29,445 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "\u001b[92m18:31:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5076/7340 [185:12<82:36, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:31,549 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:31:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:31:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:32,241 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 654, 'scroll_x': 0, 'x': 601, 'y': 418})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 69%|███████████████████████████-------------| 5077/7340 [185:13<82:33, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5078/7340 [185:15<82:31, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:33,929 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "\u001b[92m18:31:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5078/7340 [185:16<82:31, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:31:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:36,137 - agent.ComputerAgent - INFO - Computer: click({'x': 747, 'y': 284})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5078/7340 [185:17<82:32, 27.4 steps/min]2025-08-11 18:31:36,819 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:31:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5079/7340 [185:19<82:30, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:31:39,579 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:31:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5080/7340 [185:21<82:27, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:43,927 - agent.ComputerAgent - INFO - Computer: type({'text': 'Configure Display Language'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5080/7340 [185:26<82:30, 27.4 steps/min]\u001b[92m18:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:45,846 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m18:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:46,514 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:31:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5081/7340 [185:28<82:27, 27.4 steps/min]\u001b[92m18:31:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:47,172 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 660, 'scroll_x': 0, 'x': 651, 'y': 603})\n",
+ "\u001b[92m18:31:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:48,241 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 69%|███████████████████████████-------------| 5082/7340 [185:30<82:25, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:49,540 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5084/7340 [185:31<82:19, 27.4 steps/min]2025-08-11 18:31:50,794 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m18:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:51,429 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5084/7340 [185:33<82:20, 27.4 steps/min]\u001b[92m18:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:31:52,624 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 75, 'y': 219}, {'x': 124, 'y': 192}]})\n",
+ " 69%|███████████████████████████-------------| 5084/7340 [185:34<82:20, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5085/7340 [185:35<82:18, 27.4 steps/min]2025-08-11 18:31:54,319 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:31:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5085/7340 [185:36<82:18, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5086/7340 [185:37<82:16, 27.4 steps/min]\u001b[92m18:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:31:56,855 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 53})\n",
+ "\u001b[92m18:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:57,518 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:31:57,519 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 18, 'y': 385})\n",
+ " 69%|███████████████████████████-------------| 5086/7340 [185:39<82:16, 27.4 steps/min]2025-08-11 18:31:58,165 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:31:58,853 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:31:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:31:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5088/7340 [185:41<82:11, 27.4 steps/min]\n",
+ "2025-08-11 18:32:00,164 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m18:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5088/7340 [185:43<82:12, 27.4 steps/min]\u001b[92m18:32:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:32:02,140 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 646, 'scroll_x': 0, 'x': 526, 'y': 420})\n",
+ "\u001b[92m18:32:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:32:02,805 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:32:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:32:03,494 - agent.ComputerAgent - INFO - Computer: double_click({'x': 732, 'y': 301})\n",
+ "\u001b[92m18:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5088/7340 [185:45<82:13, 27.4 steps/min]2025-08-11 18:32:04,164 - agent.ComputerAgent - INFO - Computer: click({'x': 676, 'y': 99})\n",
+ "2025-08-11 18:32:04,844 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:32:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 69%|███████████████████████████-------------| 5092/7340 [185:47<82:01, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:06,534 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m18:32:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5092/7340 [185:49<82:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:08,724 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:32:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5092/7340 [185:50<82:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:10,629 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 69%|███████████████████████████-------------| 5093/7340 [185:53<82:00, 27.4 steps/min]\u001b[92m18:32:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:11,965 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m18:32:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:12,635 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5093/7340 [185:54<82:01, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:32:13,690 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5093/7340 [185:55<82:01, 27.4 steps/min]\n",
+ "2025-08-11 18:32:14,374 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:32:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:32:15,049 - agent.ComputerAgent - INFO - Computer: click({'x': 156, 'y': 52})\n",
+ " 69%|███████████████████████████-------------| 5093/7340 [185:56<82:02, 27.4 steps/min]2025-08-11 18:32:16,233 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:32:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5094/7340 [185:58<81:59, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 69%|███████████████████████████-------------| 5095/7340 [185:59<81:56, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:17,875 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m18:32:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5095/7340 [186:00<81:57, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 69%|███████████████████████████-------------| 5095/7340 [186:01<81:57, 27.4 steps/min]\u001b[92m18:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:20,491 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 676, 'scroll_x': 0, 'x': 651, 'y': 613})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 676, 'scroll_x': 0, 'x': 651, 'y': 613})\n",
+ " 69%|███████████████████████████-------------| 5095/7340 [186:02<81:58, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:22,175 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5097/7340 [186:03<81:52, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:22,845 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m18:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 69%|███████████████████████████-------------| 5097/7340 [186:05<81:53, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 69%|███████████████████████████-------------| 5098/7340 [186:07<81:51, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:27,065 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:32:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5098/7340 [186:09<81:52, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:28,937 - agent.ComputerAgent - INFO - Computer: move({'x': 112, 'y': 209})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 112, 'y': 209})\n",
+ " 69%|███████████████████████████-------------| 5098/7340 [186:10<81:52, 27.4 steps/min]2025-08-11 18:32:29,605 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:32:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:32:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5099/7340 [186:12<81:50, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:32:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:31,498 - agent.ComputerAgent - INFO - Computer: click({'x': 199, 'y': 209})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 199, 'y': 209})\n",
+ " 69%|███████████████████████████-------------| 5099/7340 [186:13<81:50, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 69%|███████████████████████████-------------| 5101/7340 [186:14<81:44, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:32:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:32:35,314 - agent.ComputerAgent - INFO - Computer: move({'x': 66, 'y': 345})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 66, 'y': 345})\n",
+ " 69%|███████████████████████████-------------| 5101/7340 [186:17<81:45, 27.4 steps/min]\u001b[92m18:32:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:32:35,985 - agent.ComputerAgent - INFO - Computer: click({'x': 745, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 745, 'y': 243})\n",
+ "\u001b[92m18:32:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:36,620 - agent.ComputerAgent - INFO - Computer: click({'x': 179, 'y': 223})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 179, 'y': 223})\n",
+ " 70%|███████████████████████████-------------| 5102/7340 [186:18<81:43, 27.4 steps/min]2025-08-11 18:32:37,275 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:32:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5104/7340 [186:19<81:37, 27.4 steps/min]2025-08-11 18:32:38,844 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:32:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5104/7340 [186:20<81:38, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5104/7340 [186:21<81:38, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5104/7340 [186:22<81:39, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:41,672 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:42,691 - agent.ComputerAgent - INFO - Computer: click({'x': 28, 'y': 13})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 28, 'y': 13})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5104/7340 [186:25<81:40, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:32:44,679 - agent.ComputerAgent - INFO - Computer: type({'text': 'arabic'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'arabic'})\n",
+ "\u001b[92m18:32:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5105/7340 [186:26<81:37, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:45,369 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 656, 'scroll_x': 0, 'x': 601, 'y': 433})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 656, 'scroll_x': 0, 'x': 601, 'y': 433})\n",
+ "2025-08-11 18:32:46,036 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:32:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5106/7340 [186:27<81:34, 27.4 steps/min]2025-08-11 18:32:46,685 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:32:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5107/7340 [186:28<81:32, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:32:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5107/7340 [186:30<81:32, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:32:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:32:49,527 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 668, 'scroll_x': 0, 'x': 651, 'y': 603})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 668, 'scroll_x': 0, 'x': 651, 'y': 603})\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:33<81:31, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:52,757 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:32:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:34<81:31, 27.4 steps/min]2025-08-11 18:32:53,426 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:32:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:36<81:32, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:37<81:32, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a2acb07-9b9c-48d4-9515-9b1c3b814a07/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:38<81:33, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/06215fad-881b-4e96-84a9-854f2d453fc5/close \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:39<81:33, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:32:59,035 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:41<81:34, 27.4 steps/min]\u001b[92m18:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:43<81:35, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:46<81:36, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]27.3 steps/min]\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:49<81:38, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:50<81:38, 27.3 steps/min]\u001b[92m18:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:33:10,080 - agent.ComputerAgent - INFO - Computer: click({'x': 352, 'y': 214})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 352, 'y': 214})\n",
+ "\u001b[92m18:33:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:33:10,737 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 323})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 66, 'y': 323})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:33:12,022 - agent.ComputerAgent - INFO - Computer: type({'text': 'terminal'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'terminal'})\n",
+ " 70%|███████████████████████████-------------| 5108/7340 [186:53<81:39, 27.3 steps/min]2025-08-11 18:33:12,709 - agent.ComputerAgent - INFO - Computer: click({'x': 759, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 759, 'y': 243})\n",
+ "\u001b[92m18:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:33:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:33:13,341 - agent.ComputerAgent - INFO - Computer: click({'x': 97, 'y': 51})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 97, 'y': 51})\n",
+ "2025-08-11 18:33:14,023 - agent.ComputerAgent - INFO - Computer: click({'x': 286, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 286, 'y': 149})\n",
+ " 70%|███████████████████████████-------------| 5114/7340 [186:59<81:23, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:19,266 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:33:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:33:19,936 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:33:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5114/7340 [187:01<81:24, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:20,629 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:33:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:33:21,306 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:33:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:33:21,959 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5114/7340 [187:04<81:25, 27.3 steps/min]\u001b[92m18:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:33:24,124 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:33:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5114/7340 [187:05<81:26, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:33:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:33:24,835 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 688, 'scroll_x': 0, 'x': 501, 'y': 647})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 688, 'scroll_x': 0, 'x': 501, 'y': 647})\n",
+ " 70%|███████████████████████████-------------| 5115/7340 [187:10<81:25, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|███████████████████████████-------------| 5115/7340 [187:11<81:25, 27.3 steps/min]\u001b[92m18:33:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:33:31,242 - agent.ComputerAgent - INFO - Computer: double_click({'x': 354, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 354, 'y': 136})\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:13<81:23, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:33,427 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:33:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:15<81:24, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:35,669 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:17<81:25, 27.3 steps/min]2025-08-11 18:33:36,854 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:33:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:18<81:25, 27.3 steps/min]2025-08-11 18:33:37,525 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:33:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:20<81:26, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:22<81:27, 27.3 steps/min]\u001b[92m18:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|███████████████████████████-------------| 5116/7340 [187:23<81:27, 27.3 steps/min]\u001b[92m18:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:33:42,613 - agent.ComputerAgent - INFO - Computer: click({'x': 130, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 130, 'y': 304})\n",
+ "\u001b[92m18:33:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:33:43,269 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 167}, {'x': 125, 'y': 196}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 80, 'y': 167}, {'x': 125, 'y': 196}]})\n",
+ " 70%|███████████████████████████-------------| 5118/7340 [187:28<81:23, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:48,130 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5118/7340 [187:29<81:24, 27.3 steps/min]2025-08-11 18:33:48,776 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:33:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:49,428 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:33:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5119/7340 [187:35<81:23, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5119/7340 [187:36<81:23, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:33:55,339 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:33:56,403 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 643, 'scroll_x': 0, 'x': 651, 'y': 646})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 643, 'scroll_x': 0, 'x': 651, 'y': 646})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:33:57,705 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\nls -l\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/Desktop\\nls -l\\n'})\n",
+ " 70%|███████████████████████████-------------| 5119/7340 [187:39<81:25, 27.3 steps/min]INFO:openai._base_client:Retrying request to /chat/completions in 0.465996 seconds\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:33:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5121/7340 [187:41<81:19, 27.3 steps/min]\u001b[92m18:33:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:34:00,738 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 151})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 151})\n",
+ " 70%|███████████████████████████-------------| 5121/7340 [187:42<81:20, 27.3 steps/min]\u001b[92m18:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:01,917 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 46, 'y': 180}, {'x': 125, 'y': 180}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 46, 'y': 180}, {'x': 125, 'y': 180}]})\n",
+ " 70%|███████████████████████████-------------| 5122/7340 [187:43<81:17, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5123/7340 [187:44<81:14, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:04,320 - agent.ComputerAgent - INFO - Computer: click({'x': 713, 'y': 751})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 713, 'y': 751})\n",
+ " 70%|███████████████████████████-------------| 5123/7340 [187:46<81:15, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:04,958 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:34:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:05,657 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:34:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5124/7340 [187:48<81:13, 27.3 steps/min]\u001b[92m18:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:34:07,673 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "\u001b[92m18:34:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5124/7340 [187:49<81:13, 27.3 steps/min]2025-08-11 18:34:08,680 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 573, 'scroll_x': 0, 'x': 12, 'y': 12})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 573, 'scroll_x': 0, 'x': 12, 'y': 12})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:09,318 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ " 70%|███████████████████████████-------------| 5124/7340 [187:51<81:14, 27.3 steps/min]\u001b[92m18:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:34:09,989 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:11,041 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:34:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5125/7340 [187:52<81:12, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5125/7340 [187:53<81:12, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a4a2a38e-bec8-46b5-b9c9-3e82144e6ff7/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:13,540 - agent.ComputerAgent - INFO - Computer: click({'x': 943, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 943, 'y': 64})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5125/7340 [187:55<81:13, 27.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5126/7340 [187:56<81:10, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c268b680-eafe-4b8d-914a-28e5540231cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:15,728 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5126/7340 [187:57<81:10, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5126/7340 [187:58<81:11, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c268b680-eafe-4b8d-914a-28e5540231cd/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5126/7340 [188:00<81:12, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c268b680-eafe-4b8d-914a-28e5540231cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]2025-08-11 18:34:20,079 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5126/7340 [188:01<81:12, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 18:34:22,137 - agent.ComputerAgent - INFO - Computer: type({'text': 'mkdir -p /tmp/lo-temp-profile && libreoffice --headless -env:UserInstallation=file:///tmp/lo-temp-profile --convert-to \"csv:Text - txt - csv (StarCalc):44,34,0\" --outdir ~/Desktop ~/Desktop/file_example_ODS_5000.ods\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'mkdir -p /tmp/lo-temp-profile && libreoffice --headless -env:UserInstallation=file:///tmp/lo-temp-profile --convert-to \"csv:Text - txt - csv (StarCalc):44,34,0\" --outdir ~/Desktop ~/Desktop/file_example_ODS_5000.ods\\n'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]27.3 steps/min]\n",
+ " 70%|███████████████████████████-------------| 5127/7340 [188:05<81:11, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|███████████████████████████-------------| 5127/7340 [188:06<81:11, 27.3 steps/min]\u001b[92m18:34:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:25,577 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 459})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 182, 'y': 459})\n",
+ " 70%|███████████████████████████-------------| 5128/7340 [188:08<81:09, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:28,218 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:34:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5128/7340 [188:09<81:10, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5128/7340 [188:11<81:10, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:31,458 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5128/7340 [188:13<81:11, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5128/7340 [188:14<81:11, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:33,815 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 650, 'x': 1018, 'y': 392})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 650, 'x': 1018, 'y': 392})\n",
+ " 70%|███████████████████████████-------------| 5129/7340 [188:15<81:09, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5129/7340 [188:16<81:09, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:34:35,609 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m18:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:36,974 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 620, 'scroll_x': 0, 'x': 653, 'y': 676})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 620, 'scroll_x': 0, 'x': 653, 'y': 676})\n",
+ " 70%|███████████████████████████-------------| 5129/7340 [188:18<81:10, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:38,151 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 81, 'y': 182}, {'x': 124, 'y': 182}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 81, 'y': 182}, {'x': 124, 'y': 182}]})\n",
+ " 70%|███████████████████████████-------------| 5131/7340 [188:20<81:05, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|███████████████████████████-------------| 5132/7340 [188:21<81:02, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:41,328 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:34:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5132/7340 [188:23<81:03, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5132/7340 [188:24<81:03, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:34:43,633 - agent.ComputerAgent - INFO - Computer: click({'x': 689, 'y': 82})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 689, 'y': 82})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5132/7340 [188:25<81:04, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:34:44,260 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5134/7340 [188:26<80:58, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:46,138 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f365dff-cd43-450e-aa25-70afb55acec3/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5134/7340 [188:27<80:58, 27.2 steps/min]2025-08-11 18:34:46,753 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5135/7340 [188:29<80:56, 27.2 steps/min]2025-08-11 18:34:48,088 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:34:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5135/7340 [188:30<80:56, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|███████████████████████████-------------| 5135/7340 [188:31<80:57, 27.2 steps/min]2025-08-11 18:34:50,784 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:34:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5136/7340 [188:33<80:54, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:52,489 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m18:34:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5136/7340 [188:34<80:55, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:34:53,680 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:34:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5136/7340 [188:35<80:55, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it]2025-08-11 18:34:55,731 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+o'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'shift+o'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|███████████████████████████-------------| 5137/7340 [188:37<80:53, 27.2 steps/min]2025-08-11 18:34:56,909 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|███████████████████████████-------------| 5137/7340 [188:38<80:54, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.71s/it]2025-08-11 18:34:57,954 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m18:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]27.2 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:34:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5137/7340 [188:40<80:54, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|███████████████████████████-------------| 5137/7340 [188:42<80:55, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|███████████████████████████-------------| 5137/7340 [188:43<80:55, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:35:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:35:02,188 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 460})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 460})\n",
+ "\u001b[92m18:35:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:35:02,861 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 81, 'y': 181})\n",
+ " 70%|████████████████████████████------------| 5138/7340 [188:44<80:53, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:03,476 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m18:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5140/7340 [188:47<80:48, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:07,918 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -l ~/Desktop\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'ls -l ~/Desktop\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5140/7340 [188:50<80:49, 27.2 steps/min]\u001b[92m18:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:35:09,210 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:35:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:35:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:35:10,235 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 690, 'scroll_x': 0, 'x': 717, 'y': 646})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 690, 'scroll_x': 0, 'x': 717, 'y': 646})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|████████████████████████████------------| 5142/7340 [188:51<80:43, 27.2 steps/min]2025-08-11 18:35:10,911 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:35:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5143/7340 [188:54<80:41, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:12,580 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m18:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5143/7340 [188:56<80:42, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|████████████████████████████------------| 5144/7340 [188:57<80:39, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:16,811 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:35:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5144/7340 [188:58<80:40, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:35:17,451 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m18:35:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5144/7340 [188:59<80:40, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:19,112 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:35:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5144/7340 [189:00<80:41, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:20,394 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'escape'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'escape'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|████████████████████████████------------| 5146/7340 [189:03<80:36, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:22,739 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:35:22,740 - agent.ComputerAgent - INFO - Agent: I can’t see your design app/document open on this computer, so I can’t move the text box directly. Here’s the fastest, foolproof way to get your text box flush left without ever clicking the image again:\n",
+ "\n",
+ "Universal quick steps (work in any editor)\n",
+ "1) Lock or hide the image layer.\n",
+ "- Use the Layers/Selection panel and click the lock icon on the image layer (or right‑click > Lock).\n",
+ "2) Select the text box from the Layers/Selection panel (don’t click on the canvas).\n",
+ "3) Snap it to the left edge:\n",
+ "- Either set the X position to 0.\n",
+ "- Or use the Align Left to canvas/page/artboard command.\n",
+ "4) Unlock/show the image layer again if needed.\n",
+ "\n",
+ "Exact steps by app\n",
+ "- Figma:\n",
+ " - Lock image: Ctrl+Shift+L (Cmd+Shift+L on Mac) or click the lock icon in Layers.\n",
+ " - Select the text layer in Layers.\n",
+ " - In the right panel, set X = 0. Or select the frame and the text, then click Align left. Tip: Cmd/Ctrl+Click cycles through stacked layers; Right‑click > Select layer also helps.\n",
+ "\n",
+ "- Photoshop:\n",
+ " - Lock image layer (padlock in Layers).\n",
+ " - Select text layer in Layers.\n",
+ " - Press V for Move tool. Press Ctrl+A (Cmd+A) to Select All, click Align left edges in the top options bar, then Ctrl+D (Cmd+D) to deselect. Alternatively, Window > Properties and set X = 0 px.\n",
+ "\n",
+ "- Illustrator:\n",
+ " - Lock image: Object > Lock > Selection (Ctrl+2 / Cmd+2) or lock in Layers.\n",
+ " - Select text object.\n",
+ " - Align panel: set Align To = Artboard, then Horizontal Align Left. Or Transform panel X = 0.\n",
+ "\n",
+ "- InDesign:\n",
+ " - Lock image (Ctrl+L / Cmd+L) or via Layers.\n",
+ " - Select text frame.\n",
+ " - Window > Object & Layout > Align. Set Align To = Page, then Align Left. Or set X = 0 in the Control/Properties panel.\n",
+ "\n",
+ "- Canva:\n",
+ " - Select image > Lock (padlock on toolbar).\n",
+ " - Select text > Position > Left. Nudge with arrow keys if needed; it will snap to the page edge.\n",
+ "\n",
+ "- PowerPoint:\n",
+ " - Home > Select > Selection Pane: click the eye to hide the image (or select the text from the pane).\n",
+ " - Shape Format > Align > Align to Slide > Align Left.\n",
+ "\n",
+ "- Google Slides:\n",
+ " - Arrange > Order to move im...\n",
+ "2025-08-11 18:35:23,380 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 3277\n",
+ " - prompt_tokens: 1951\n",
+ " - total_tokens: 5228\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 2624\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0352\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5147/7340 [189:05<80:33, 27.2 steps/min]2025-08-11 18:35:24,059 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m18:35:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5147/7340 [189:06<80:34, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|████████████████████████████------------| 5147/7340 [189:07<80:34, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c268b680-eafe-4b8d-914a-28e5540231cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:35:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:35:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 70%|████████████████████████████------------| 5147/7340 [189:08<80:35, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:35:27,111 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 76, 'y': 181}, {'x': 124, 'y': 181}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 76, 'y': 181}, {'x': 124, 'y': 181}]})\n",
+ "\u001b[92m18:35:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:35:27,806 - agent.ComputerAgent - INFO - Computer: click({'x': 290, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 290, 'y': 149})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|████████████████████████████------------| 5148/7340 [189:09<80:32, 27.2 steps/min]2025-08-11 18:35:28,462 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:35:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5150/7340 [189:10<80:26, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:29,639 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m18:35:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5150/7340 [189:13<80:28, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5151/7340 [189:14<80:25, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:33,840 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m18:35:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 70%|████████████████████████████------------| 5151/7340 [189:15<80:25, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:35:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 70%|████████████████████████████------------| 5151/7340 [189:16<80:26, 27.2 steps/min]\u001b[92m18:35:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:35:35,720 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 488})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 488})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|████████████████████████████------------| 5152/7340 [189:17<80:23, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c268b680-eafe-4b8d-914a-28e5540231cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:35:38,050 - agent.ComputerAgent - INFO - Agent: I converted the opened ODS on your Desktop to a CSV with the same base name while LibreOffice was running.\n",
+ "\n",
+ "Command used:\n",
+ "mkdir -p /tmp/lo-temp-profile && libreoffice --headless -env:UserInstallation=file:///tmp/lo-temp-profile --convert-to \"csv:Text - txt - csv (StarCalc):44,34,0\" --outdir ~/Desktop ~/Desktop/file_example_ODS_5000.ods\n",
+ "\n",
+ "Notes:\n",
+ "- A temporary LibreOffice profile was used (-env:UserInstallation=…) so the conversion works even with the GUI instance open.\n",
+ "- The CSV file has been created on your Desktop as file_example_ODS_5000.csv.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 18:35:38,700 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 920\n",
+ " - prompt_tokens: 10826\n",
+ " - total_tokens: 11746\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 768\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0227\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2350cfc0-8e2e-4302-b87c-df7d35bd6eea/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c268b680-eafe-4b8d-914a-28e5540231cd/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5173/7340 [189:20<79:18, 27.3 steps/min]2025-08-11 18:35:39,393 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m18:35:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5173/7340 [189:22<79:19, 27.3 steps/min]2025-08-11 18:35:41,381 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:35:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:35:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/reset \"HTTP/1.1 200 OK\"\n",
+ " 70%|████████████████████████████------------| 5173/7340 [189:24<79:20, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 70%|████████████████████████████------------| 5174/7340 [189:25<79:18, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.67s/it]2025-08-11 18:35:45,241 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m18:35:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5182/7340 [189:26<78:53, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5afdf327-0d8f-4749-8016-19cb1aedf273/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]27.4 steps/min]2025-08-11 18:35:47,491 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:35:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]27.4 steps/min]2025-08-11 18:35:48,319 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:35:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]27.4 steps/min]\n",
+ " 71%|████████████████████████████------------| 5183/7340 [189:31<78:52, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:35:50,250 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5183/7340 [189:32<78:52, 27.3 steps/min]\u001b[92m18:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:35:50,943 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 713, 'scroll_x': 0, 'x': 716, 'y': 646})\n",
+ " 71%|████████████████████████████------------| 5184/7340 [189:34<78:50, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 71%|████████████████████████████------------| 5185/7340 [189:35<78:47, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5185/7340 [189:38<78:49, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:35:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5185/7340 [189:39<78:49, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 18:35:59,836 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d05e9e78-ad03-41fc-a347-043ec46bd299/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]27.3 steps/min]2025-08-11 18:36:01,386 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5185/7340 [189:43<78:51, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:36:02,052 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]27.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5185/7340 [189:46<78:52, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:36:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5185/7340 [189:47<78:52, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5186/7340 [189:49<78:50, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/89880137-9134-4973-9389-b3535802254c/close \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5186/7340 [189:51<78:51, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]27.3 steps/min]\n",
+ "\u001b[92m18:36:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:36:12,164 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:36:12,165 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 428})\n",
+ " 71%|████████████████████████████------------| 5186/7340 [189:54<78:52, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 71%|████████████████████████████------------| 5187/7340 [189:56<78:50, 27.3 steps/min]\u001b[92m18:36:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:36:15,365 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:36:15,367 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 650})\n",
+ " 71%|████████████████████████████------------| 5188/7340 [189:59<78:48, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:36:18,079 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:36:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5188/7340 [190:02<78:49, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:36:21,321 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:36:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5188/7340 [190:04<78:50, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:36:24,203 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 71%|████████████████████████████------------| 5188/7340 [190:05<78:51, 27.3 steps/min]2025-08-11 18:36:25,341 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5188/7340 [190:07<78:51, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5188/7340 [190:10<78:52, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:36:29,780 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'pagedown'})\n",
+ " 71%|████████████████████████████------------| 5189/7340 [190:12<78:50, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:36:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5189/7340 [190:13<78:51, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5189/7340 [190:17<78:53, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]\u001b[92m18:36:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]27.3 steps/min]\n",
+ "2025-08-11 18:36:37,871 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:36:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:36:39,406 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ " 71%|████████████████████████████------------| 5189/7340 [190:21<78:54, 27.3 steps/min]\u001b[92m18:36:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:36:40,047 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 142})\n",
+ "\u001b[92m18:36:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:36:40,733 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 629})\n",
+ " 71%|████████████████████████████------------| 5189/7340 [190:22<78:54, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:23<78:18, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0d923fcd-4666-4869-8ad2-17460c904167/close \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:24<78:18, 27.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:26<78:19, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:36:46,172 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:36:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:36:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:28<78:20, 27.3 steps/min]2025-08-11 18:36:47,482 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:36:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:30<78:21, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:32<78:21, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:33<78:22, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/497d5104-1e6e-44a9-a164-fec745a337b6/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:36:55,357 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5201/7340 [190:37<78:23, 27.3 steps/min]2025-08-11 18:36:56,039 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:36:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:36:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:36:56,709 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 387})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/reset \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5202/7340 [190:38<78:21, 27.3 steps/min]2025-08-11 18:36:57,382 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5203/7340 [190:39<78:18, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:36:58,641 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:36:59,325 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 561})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5203/7340 [190:41<78:19, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:00,501 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:37:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5204/7340 [190:42<78:16, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5204/7340 [190:43<78:16, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:03,933 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:37:03,935 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ " 71%|████████████████████████████------------| 5204/7340 [190:45<78:17, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:37:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:05,291 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:37:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:06,012 - agent.ComputerAgent - INFO - Computer: click({'x': 524, 'y': 503})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5205/7340 [190:47<78:15, 27.3 steps/min]\u001b[92m18:37:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:06,653 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:37:06,654 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 41})\n",
+ "2025-08-11 18:37:07,313 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:37:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:08,648 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:37:08,649 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ " 71%|████████████████████████████------------| 5206/7340 [190:50<78:13, 27.3 steps/min]2025-08-11 18:37:09,323 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:37:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5208/7340 [190:55<78:09, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5208/7340 [190:56<78:09, 27.3 steps/min]2025-08-11 18:37:15,613 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:37:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:openai._base_client:Retrying request to /chat/completions in 0.403242 seconds\n",
+ " 71%|████████████████████████████------------| 5208/7340 [190:57<78:10, 27.3 steps/min]2025-08-11 18:37:16,773 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:37:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:18,458 - agent.ComputerAgent - INFO - Computer: type({'text': 'drive.google.com'})\n",
+ " 71%|████████████████████████████------------| 5208/7340 [191:00<78:11, 27.3 steps/min]2025-08-11 18:37:19,460 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:37:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5209/7340 [191:01<78:08, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5209/7340 [191:02<78:09, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:21,836 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:37:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5209/7340 [191:03<78:09, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:37:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:22,505 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:37:22,506 - agent.ComputerAgent - INFO - Computer: click({'x': 1006, 'y': 9})\n",
+ " 71%|████████████████████████████------------| 5209/7340 [191:04<78:10, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:24,905 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ " 71%|████████████████████████████------------| 5210/7340 [191:06<78:07, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:26,064 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:37:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:37:26,733 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ " 71%|████████████████████████████------------| 5210/7340 [191:08<78:08, 27.3 steps/min]\u001b[92m18:37:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:37:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5210/7340 [191:10<78:09, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:37:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:29,395 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 429})\n",
+ "\u001b[92m18:37:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:30,066 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 45})\n",
+ "2025-08-11 18:37:30,744 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:37:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5210/7340 [191:12<78:10, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:37:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:31,387 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 239})\n",
+ " 71%|████████████████████████████------------| 5212/7340 [191:13<78:04, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 71%|████████████████████████████------------| 5213/7340 [191:14<78:01, 27.3 steps/min]\u001b[92m18:37:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:33,218 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:34,578 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5214/7340 [191:16<77:59, 27.3 steps/min]INFO:openai._base_client:Retrying request to /chat/completions in 0.421017 seconds\n",
+ " 71%|████████████████████████████------------| 5215/7340 [191:17<77:56, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:36,730 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:37:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5215/7340 [191:18<77:57, 27.3 steps/min]2025-08-11 18:37:37,733 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:37:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5215/7340 [191:19<77:57, 27.3 steps/min]2025-08-11 18:37:38,404 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:37:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5215/7340 [191:21<77:58, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:40,395 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:37:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:37:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:41,082 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 527})\n",
+ "2025-08-11 18:37:41,718 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:37:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5215/7340 [191:23<77:59, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:37:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:42,379 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 502})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5216/7340 [191:25<77:56, 27.2 steps/min]\u001b[92m18:37:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:37:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:44,917 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 139})\n",
+ " 71%|████████████████████████████------------| 5217/7340 [191:26<77:54, 27.3 steps/min]\u001b[92m18:37:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:45,530 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -1169, 'scroll_x': 0, 'x': 526, 'y': 427})\n",
+ " 71%|████████████████████████████------------| 5219/7340 [191:29<77:49, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 71%|████████████████████████████------------| 5219/7340 [191:30<77:49, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:50,126 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thunderbird'})\n",
+ "\u001b[92m18:37:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1064657b-b89a-4eeb-8197-1c110af6b752/reset \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5219/7340 [191:31<77:50, 27.2 steps/min]2025-08-11 18:37:50,787 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 440})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:37:51,454 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:37:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5220/7340 [191:34<77:48, 27.2 steps/min]\u001b[92m18:37:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:53,444 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:37:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:37:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:37:54,116 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 10})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3077c8ef-543a-4fa8-b46c-49b632230eed/close \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5223/7340 [191:35<77:39, 27.3 steps/min]\u001b[92m18:37:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:55,819 - agent.ComputerAgent - INFO - Computer: click({'x': 80, 'y': 430})\n",
+ "2025-08-11 18:37:56,505 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:37:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5224/7340 [191:38<77:37, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:37:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:37:57,824 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:37:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5225/7340 [191:39<77:34, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]2025-08-11 18:37:59,521 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:37:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5225/7340 [191:42<77:36, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]2025-08-11 18:38:02,455 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:38:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5225/7340 [191:44<77:36, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:03,878 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:38:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 18:38:04,546 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:38:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5225/7340 [191:47<77:37, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:07,175 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ " 71%|████████████████████████████------------| 5225/7340 [191:48<77:38, 27.2 steps/min]\u001b[92m18:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:07,849 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 272})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:38:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:38:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:09,216 - agent.ComputerAgent - INFO - Computer: double_click({'x': 540, 'y': 131})\n",
+ "2025-08-11 18:38:09,888 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -1144, 'scroll_x': 0, 'x': 526, 'y': 501})\n",
+ " 71%|████████████████████████████------------| 5225/7340 [191:51<77:39, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:10,589 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 237})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:12,570 - agent.ComputerAgent - INFO - Computer: type({'text': 'etherpad.wikimedia.org'})\n",
+ " 71%|████████████████████████████------------| 5228/7340 [191:54<77:31, 27.2 steps/min]\u001b[92m18:38:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:13,248 - agent.ComputerAgent - INFO - Computer: click({'x': 988, 'y': 35})\n",
+ " 71%|████████████████████████████------------| 5237/7340 [191:55<77:04, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/055914cd-07b0-4dcd-9407-c6975b1eccbf/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5238/7340 [191:56<77:01, 27.3 steps/min]2025-08-11 18:38:15,064 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5238/7340 [191:57<77:01, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:16,722 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:38:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5238/7340 [191:59<77:02, 27.3 steps/min]\u001b[92m18:38:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:18,034 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:38:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5238/7340 [192:00<77:03, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]2025-08-11 18:38:19,866 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:38:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5238/7340 [192:01<77:03, 27.3 steps/min]2025-08-11 18:38:20,546 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:38:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:21,924 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:38:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5238/7340 [192:03<77:04, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 71%|████████████████████████████------------| 5238/7340 [192:05<77:05, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:38:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5238/7340 [192:06<77:05, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:38:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:25,787 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:38:25,788 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 17, 'y': 385})\n",
+ "\u001b[92m18:38:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:26,472 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 90})\n",
+ " 71%|████████████████████████████------------| 5238/7340 [192:08<77:06, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 71%|████████████████████████████------------| 5240/7340 [192:09<77:00, 27.3 steps/min]\u001b[92m18:38:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:28,364 - agent.ComputerAgent - INFO - Computer: click({'x': 522, 'y': 503})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5240/7340 [192:10<77:01, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:38:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:30,131 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ " 71%|████████████████████████████------------| 5241/7340 [192:11<76:58, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 71%|████████████████████████████------------| 5242/7340 [192:12<76:55, 27.3 steps/min]\u001b[92m18:38:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:31,939 - agent.ComputerAgent - INFO - Computer: click({'x': 631, 'y': 529})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5242/7340 [192:14<76:56, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:33,272 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:38:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:38:34,570 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "2025-08-11 18:38:35,241 - agent.ComputerAgent - INFO - Computer: click({'x': 370, 'y': 595})\n",
+ " 71%|████████████████████████████------------| 5243/7340 [192:16<76:54, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:35,865 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:38:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:18<76:51, 27.3 steps/min]2025-08-11 18:38:37,028 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:38:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:37,677 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:38:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:19<76:52, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:20<76:52, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:21<76:53, 27.3 steps/min]2025-08-11 18:38:40,375 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:38:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:22<76:53, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:23<76:53, 27.3 steps/min]2025-08-11 18:38:42,595 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:38:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:25<76:54, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:44,961 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9b006d7b-b853-41ed-8a84-b7eaa5b6e94b/close \"HTTP/1.1 200 OK\"\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:27<76:55, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:46,285 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:38:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:28<76:55, 27.2 steps/min]2025-08-11 18:38:47,641 - agent.ComputerAgent - INFO - Computer: click({'x': 112, 'y': 331})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:38:48,970 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 71%|████████████████████████████------------| 5244/7340 [192:30<76:56, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.57s/it]27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5246/7340 [192:35<76:52, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "\u001b[92m18:38:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5246/7340 [192:37<76:53, 27.2 steps/min]2025-08-11 18:38:56,523 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:38:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:38:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:38:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:38:58,507 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ "\u001b[92m18:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 71%|████████████████████████████------------| 5247/7340 [192:41<76:51, 27.2 steps/min]\u001b[92m18:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:00,473 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 457})\n",
+ "\u001b[92m18:39:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:39:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:39:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:01,115 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:01,765 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:39:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:02,440 - agent.ComputerAgent - INFO - Computer: click({'x': 131, 'y': 90})\n",
+ "2025-08-11 18:39:03,096 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 77})\n",
+ "2025-08-11 18:39:03,778 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -6581, 'scroll_x': 0, 'x': 986, 'y': 416})\n",
+ " 71%|████████████████████████████------------| 5247/7340 [192:45<76:53, 27.2 steps/min]\u001b[92m18:39:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:04,441 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "\u001b[92m18:39:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:05,097 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 281})\n",
+ "2025-08-11 18:39:05,785 - agent.ComputerAgent - INFO - Computer: click({'x': 993, 'y': 757})\n",
+ " 72%|████████████████████████████------------| 5254/7340 [192:48<76:33, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:07,465 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m18:39:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5254/7340 [192:49<76:33, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5254/7340 [192:50<76:34, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:39:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:10,331 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 115, 'y': 91})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 72%|████████████████████████████------------| 5255/7340 [192:52<76:31, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:11,005 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:39:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:11,663 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:39:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5256/7340 [192:53<76:28, 27.2 steps/min]2025-08-11 18:39:12,336 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m18:39:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:13,006 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:39:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5256/7340 [192:54<76:29, 27.2 steps/min]2025-08-11 18:39:14,063 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:39:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5256/7340 [192:55<76:29, 27.2 steps/min]2025-08-11 18:39:14,705 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:39:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:15,387 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:39:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5256/7340 [192:57<76:30, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:17,261 - agent.ComputerAgent - INFO - Computer: type({'text': 'etherpad.wikimedia.org'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 72%|████████████████████████████------------| 5258/7340 [193:00<76:25, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:18,968 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m18:39:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5258/7340 [193:01<76:25, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:20,780 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+left'})\n",
+ " 72%|████████████████████████████------------| 5258/7340 [193:02<76:26, 27.2 steps/min]2025-08-11 18:39:21,456 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:39:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:39:22,096 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:39:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5259/7340 [193:03<76:23, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:24,093 - agent.ComputerAgent - INFO - Computer: type({'text': 'Q1\\tQ2\\tQ3\\tQ4\\tQ5\\n10\\t14\\t9\\t16\\t12\\n8\\t6\\t11\\t13\\t15\\n12\\t15\\t13\\t17\\t19\\n7\\t9\\t8\\t10\\t12\\n14\\t13\\t12\\t15\\t18\\n9\\t11\\t10\\t12\\t11\\n16\\t14\\t15\\t13\\t17\\n5\\t7\\t6\\t8\\t9\\n11\\t12\\t14\\t13\\t15'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5259/7340 [193:05<76:24, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:24,787 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:39:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:25,427 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m18:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:26,105 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:39:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:39:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5260/7340 [193:07<76:22, 27.2 steps/min]2025-08-11 18:39:26,792 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 163})\n",
+ " 72%|████████████████████████████------------| 5261/7340 [193:10<76:20, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:30,486 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:39:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:32,510 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:39:32,511 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 72%|████████████████████████████------------| 5262/7340 [193:14<76:18, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:39:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:34,496 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m18:39:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:39:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5263/7340 [193:16<76:16, 27.2 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:35,854 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 272})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 272})\n",
+ "\u001b[92m18:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:36,517 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:39:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:37,211 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 153})\n",
+ "2025-08-11 18:39:37,885 - agent.ComputerAgent - INFO - Computer: click({'x': 982, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 982, 'y': 760})\n",
+ "\u001b[92m18:39:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5263/7340 [193:19<76:17, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:39:38,554 - agent.ComputerAgent - INFO - Computer: click({'x': 595, 'y': 265})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 595, 'y': 265})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:39,899 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "2025-08-11 18:39:40,548 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:39:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5267/7340 [193:22<76:06, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:41,890 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+left'})\n",
+ "2025-08-11 18:39:42,537 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:39:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5268/7340 [193:26<76:04, 27.2 steps/min]\u001b[92m18:39:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:45,139 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 72%|████████████████████████████------------| 5268/7340 [193:27<76:05, 27.2 steps/min]\u001b[92m18:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:46,335 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 351})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 585, 'y': 351})\n",
+ "\u001b[92m18:39:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:47,002 - agent.ComputerAgent - INFO - Computer: click({'x': 780, 'y': 350})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 780, 'y': 350})\n",
+ "\u001b[92m18:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5268/7340 [193:28<76:05, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:47,637 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m18:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 72%|████████████████████████████------------| 5270/7340 [193:29<76:00, 27.2 steps/min]\u001b[92m18:39:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:48,808 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 991, 'y': 429}, {'x': 991, 'y': 416}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 991, 'y': 429}, {'x': 991, 'y': 416}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5270/7340 [193:30<76:00, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5272/7340 [193:31<75:54, 27.2 steps/min]2025-08-11 18:39:50,465 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:39:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:39:51,138 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:39:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5272/7340 [193:32<75:55, 27.2 steps/min]2025-08-11 18:39:51,846 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:39:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:52,516 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:39:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5272/7340 [193:34<75:55, 27.2 steps/min]2025-08-11 18:39:53,181 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:53,885 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5272/7340 [193:35<75:56, 27.2 steps/min]2025-08-11 18:39:54,924 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:39:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:39:55,597 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:39:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 72%|████████████████████████████------------| 5273/7340 [193:38<75:54, 27.2 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:39:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:39:57,578 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:39:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5273/7340 [193:39<75:54, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:39:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:58,638 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 137})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 249, 'y': 137})\n",
+ "\u001b[92m18:39:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5273/7340 [193:40<75:55, 27.2 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:39:59,313 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 136})\n",
+ " 72%|████████████████████████████------------| 5294/7340 [193:41<74:51, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5295/7340 [193:42<74:48, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:40:01,658 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5295/7340 [193:45<74:50, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:40:05,537 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:40:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 18:40:07,029 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:40:07,030 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5295/7340 [193:49<74:51, 27.3 steps/min]\u001b[92m18:40:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]2025-08-11 18:40:09,274 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 18:40:10,772 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:12,103 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:13,420 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thunderbird'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Thunderbird'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5296/7340 [193:56<74:51, 27.3 steps/min]\u001b[92m18:40:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:15,395 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:40:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:16,061 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:40:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:40:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:40:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:40:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5298/7340 [193:59<74:46, 27.3 steps/min]2025-08-11 18:40:18,055 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 110, 'y': 331})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 110, 'y': 331})\n",
+ "2025-08-11 18:40:18,750 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 760})\n",
+ "\u001b[92m18:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5298/7340 [194:00<74:46, 27.3 steps/min]\u001b[92m18:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:19,378 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:40:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:20,046 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 381})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 368, 'y': 381})\n",
+ "2025-08-11 18:40:20,690 - agent.ComputerAgent - INFO - Computer: click({'x': 572, 'y': 300})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 572, 'y': 300})\n",
+ "\u001b[92m18:40:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:40:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5300/7340 [194:02<74:41, 27.3 steps/min]2025-08-11 18:40:21,334 - agent.ComputerAgent - INFO - Computer: click({'x': 631, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 631, 'y': 529})\n",
+ "2025-08-11 18:40:21,965 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 181, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5302/7340 [194:04<74:35, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:40:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:23,806 - agent.ComputerAgent - INFO - Computer: click({'x': 238, 'y': 176})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 238, 'y': 176})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/001a8806-0d90-4ba3-85f6-0677177fc24e/close \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:06<74:27, 27.3 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:08<74:28, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:40:28,176 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:40:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:09<74:28, 27.3 steps/min]2025-08-11 18:40:28,850 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:40:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:29,510 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:40:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:30,149 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:11<74:29, 27.3 steps/min]2025-08-11 18:40:30,828 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:32,190 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:40:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:13<74:30, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:32,820 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:40:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:15<74:30, 27.3 steps/min]2025-08-11 18:40:34,139 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:40:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:35,059 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]\u001b[92m18:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:16<74:31, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:35,724 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:21<74:33, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 72%|████████████████████████████------------| 5305/7340 [194:22<74:33, 27.3 steps/min]\u001b[92m18:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:40,741 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 65})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 46, 'y': 65})\n",
+ "\u001b[92m18:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:41,414 - agent.ComputerAgent - INFO - Computer: click({'x': 58, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 58, 'y': 133})\n",
+ "\u001b[92m18:40:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:42,064 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 427})\n",
+ " 72%|████████████████████████████------------| 5306/7340 [194:23<74:31, 27.3 steps/min]\u001b[92m18:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:42,700 - agent.ComputerAgent - INFO - Computer: double_click({'x': 540, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 540, 'y': 131})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:40:43,388 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:40:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:45,385 - agent.ComputerAgent - INFO - Computer: type({'text': 'settings'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'settings'})\n",
+ " 72%|████████████████████████████------------| 5308/7340 [194:27<74:26, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:46,569 - agent.ComputerAgent - INFO - Computer: click({'x': 483, 'y': 436})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 483, 'y': 436})\n",
+ " 72%|████████████████████████████------------| 5311/7340 [194:29<74:18, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:40:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:40:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5311/7340 [194:30<74:18, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:49,559 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 101, 'y': 390})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 101, 'y': 390})\n",
+ "\u001b[92m18:40:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:50,211 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:50,846 - agent.ComputerAgent - INFO - Computer: click({'x': 298, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 298, 'y': 166})\n",
+ " 72%|████████████████████████████------------| 5311/7340 [194:32<74:19, 27.3 steps/min]2025-08-11 18:40:51,518 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:52,158 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5313/7340 [194:33<74:13, 27.3 steps/min]2025-08-11 18:40:52,858 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:40:54,545 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:40:56,518 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ " 72%|████████████████████████████------------| 5313/7340 [194:38<74:15, 27.3 steps/min]2025-08-11 18:40:57,189 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:40:57,830 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:40:58,486 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:40:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5313/7340 [194:41<74:16, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:00,459 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:41:00,460 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 385})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5313/7340 [194:42<74:17, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:01,789 - agent.ComputerAgent - INFO - Computer: click({'x': 218, 'y': 73})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 218, 'y': 73})\n",
+ "2025-08-11 18:41:02,419 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:41:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:03,118 - agent.ComputerAgent - INFO - Computer: click({'x': 707, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 707, 'y': 75})\n",
+ "\u001b[92m18:41:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:04,436 - agent.ComputerAgent - INFO - Computer: type({'text': 'A1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'A1'})\n",
+ " 72%|████████████████████████████------------| 5314/7340 [194:46<74:15, 27.3 steps/min]2025-08-11 18:41:05,106 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 420})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 368, 'y': 420})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5317/7340 [194:47<74:06, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:06,394 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:41:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:41:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:07,435 - agent.ComputerAgent - INFO - Computer: double_click({'x': 388, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 388, 'y': 128})\n",
+ " 72%|████████████████████████████------------| 5318/7340 [194:49<74:04, 27.3 steps/min]2025-08-11 18:41:08,118 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:41:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:10,549 - agent.ComputerAgent - INFO - Computer: type({'text': '120%'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '120%'})\n",
+ " 72%|████████████████████████████------------| 5319/7340 [194:52<74:02, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:41:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:11,727 - agent.ComputerAgent - INFO - Computer: click({'x': 620, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 620, 'y': 427})\n",
+ " 72%|████████████████████████████------------| 5320/7340 [194:53<73:59, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:12,893 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 72%|████████████████████████████------------| 5321/7340 [194:54<73:57, 27.3 steps/min]2025-08-11 18:41:13,545 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:41:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:14,227 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5321/7340 [194:56<73:58, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:16,217 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://etherpad.wikimedia.org'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://etherpad.wikimedia.org'})\n",
+ "\u001b[92m18:41:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 72%|████████████████████████████------------| 5321/7340 [194:57<73:58, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:16,863 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 128, 'y': 89})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5322/7340 [194:59<73:56, 27.3 steps/min]\u001b[92m18:41:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:41:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:19,522 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "2025-08-11 18:41:20,207 - agent.ComputerAgent - INFO - Computer: click({'x': 116, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 116, 'y': 53})\n",
+ "\u001b[92m18:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5323/7340 [195:01<73:54, 27.3 steps/min]2025-08-11 18:41:20,869 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:21,555 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5325/7340 [195:03<73:48, 27.3 steps/min]2025-08-11 18:41:22,570 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:23,249 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:41:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:24,597 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 73%|█████████████████████████████-----------| 5326/7340 [195:06<73:46, 27.3 steps/min]2025-08-11 18:41:25,218 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:41:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:25,900 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:41:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5327/7340 [195:07<73:44, 27.3 steps/min]2025-08-11 18:41:26,619 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:41:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5327/7340 [195:08<73:44, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:41:27,774 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:41:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5327/7340 [195:10<73:45, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:29,969 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:41:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:31,342 - agent.ComputerAgent - INFO - Computer: type({'text': 'site:arxiv-daily.com \"Oct 11, 2023\" language model'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'site:arxiv-daily.com \"Oct 11, 2023\" language model'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5327/7340 [195:13<73:46, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5328/7340 [195:14<73:43, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:33,651 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:41:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:34,308 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:41:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:34,961 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:41:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:35,630 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:41:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:41:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:41:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:37,441 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5328/7340 [195:19<73:45, 27.3 steps/min]\u001b[92m18:41:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:38,751 - agent.ComputerAgent - INFO - Computer: click({'x': 637, 'y': 470})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 637, 'y': 470})\n",
+ "2025-08-11 18:41:39,382 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 302})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 302})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:41:40,712 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5328/7340 [195:23<73:46, 27.3 steps/min]\u001b[92m18:41:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:42,087 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 10})\n",
+ "\u001b[92m18:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:42,752 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:41:42,753 - agent.ComputerAgent - INFO - Computer: click({'x': 96, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 96, 'y': 53})\n",
+ "\u001b[92m18:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5330/7340 [195:24<73:41, 27.3 steps/min]2025-08-11 18:41:43,412 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:44,037 - agent.ComputerAgent - INFO - Computer: click({'x': 170, 'y': 687})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 170, 'y': 687})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5332/7340 [195:26<73:36, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:41:46,012 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ " 73%|█████████████████████████████-----------| 5333/7340 [195:27<73:33, 27.3 steps/min]\u001b[92m18:41:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:46,658 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ " 73%|█████████████████████████████-----------| 5335/7340 [195:29<73:28, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5335/7340 [195:31<73:29, 27.3 steps/min]\u001b[92m18:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:50,682 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:41:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:51,972 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 390})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 100, 'y': 390})\n",
+ "\u001b[92m18:41:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5335/7340 [195:33<73:29, 27.3 steps/min]2025-08-11 18:41:52,612 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:41:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:53,291 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5336/7340 [195:35<73:27, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:41:53,976 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:41:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:41:54,658 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:41:54,659 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 13})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 13})\n",
+ "\u001b[92m18:41:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5336/7340 [195:36<73:27, 27.3 steps/min]2025-08-11 18:41:55,318 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 963, 'y': 760}, {'x': 966, 'y': 760}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 963, 'y': 760}, {'x': 966, 'y': 760}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:56,622 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:41:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5337/7340 [195:38<73:25, 27.3 steps/min]\u001b[92m18:41:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:41:57,892 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:41:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5338/7340 [195:41<73:23, 27.3 steps/min]\u001b[92m18:41:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:00,338 - agent.ComputerAgent - INFO - Computer: move({'x': 961, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 961, 'y': 760})\n",
+ "\u001b[92m18:42:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:00,971 - agent.ComputerAgent - INFO - Computer: click({'x': 836, 'y': 382})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 836, 'y': 382})\n",
+ " 73%|█████████████████████████████-----------| 5338/7340 [195:42<73:24, 27.3 steps/min]2025-08-11 18:42:01,618 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 271})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 271})\n",
+ "\u001b[92m18:42:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:02,939 - agent.ComputerAgent - INFO - Computer: click({'x': 192, 'y': 32})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 192, 'y': 32})\n",
+ " 73%|█████████████████████████████-----------| 5340/7340 [195:44<73:18, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:03,601 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:42:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:04,253 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:42:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5342/7340 [195:46<73:13, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:42:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:05,816 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 93, 'y': 176}, {'x': 382, 'y': 175}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 93, 'y': 176}, {'x': 382, 'y': 175}]})\n",
+ " 73%|█████████████████████████████-----------| 5343/7340 [195:48<73:11, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:42:08,009 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:42:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5343/7340 [195:49<73:11, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5343/7340 [195:50<73:12, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:09,862 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:42:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:10,592 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5343/7340 [195:53<73:12, 27.3 steps/min]2025-08-11 18:42:12,266 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:42:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:12,955 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 99})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 489, 'y': 99})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5343/7340 [195:55<73:13, 27.3 steps/min]\u001b[92m18:42:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:14,312 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:14,980 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:15,632 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:42:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:16,296 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 124, 'y': 332})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 124, 'y': 332})\n",
+ "\u001b[92m18:42:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5344/7340 [195:58<73:11, 27.3 steps/min]\u001b[92m18:42:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:17,603 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 108})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 110, 'y': 108})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:42:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5345/7340 [196:00<73:09, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:19,534 - agent.ComputerAgent - INFO - Computer: click({'x': 993, 'y': 757})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 993, 'y': 757})\n",
+ "\u001b[92m18:42:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:20,151 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:42:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:20,854 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 132})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 132})\n",
+ "\u001b[92m18:42:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5346/7340 [196:02<73:07, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:21,468 - agent.ComputerAgent - INFO - Computer: click({'x': 131, 'y': 389})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 131, 'y': 389})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:42:22,798 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://etherpad.wikimedia.org'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://etherpad.wikimedia.org'})\n",
+ " 73%|█████████████████████████████-----------| 5348/7340 [196:04<73:01, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5350/7340 [196:05<72:56, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:24,643 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:25,290 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5350/7340 [196:07<72:57, 27.3 steps/min]\u001b[92m18:42:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:42:27,264 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:42:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5351/7340 [196:10<72:55, 27.3 steps/min]\u001b[92m18:42:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:29,291 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:42:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:29,935 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 586, 'x': 20, 'y': 13})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 586, 'x': 20, 'y': 13})\n",
+ "\u001b[92m18:42:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:30,633 - agent.ComputerAgent - INFO - Computer: click({'x': 653, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 653, 'y': 142})\n",
+ "2025-08-11 18:42:31,293 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:42:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5352/7340 [196:13<72:53, 27.3 steps/min]2025-08-11 18:42:31,959 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:42:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:32,612 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:42:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:33,290 - agent.ComputerAgent - INFO - Computer: click({'x': 596, 'y': 535})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 596, 'y': 535})\n",
+ "2025-08-11 18:42:33,967 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:35,021 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:42:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5353/7340 [196:16<72:51, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:42:36,025 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:42:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5354/7340 [196:18<72:49, 27.3 steps/min]\u001b[92m18:42:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:42:37,377 - agent.ComputerAgent - INFO - LLM processing started with 15 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 15 messages\n",
+ "\u001b[92m18:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:38,076 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 90})\n",
+ " 73%|█████████████████████████████-----------| 5354/7340 [196:19<72:49, 27.3 steps/min]2025-08-11 18:42:38,718 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:42:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5355/7340 [196:20<72:46, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:42:40,561 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5356/7340 [196:22<72:44, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:42:41,236 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m18:42:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:42,570 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:42:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5356/7340 [196:24<72:45, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:43,227 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:43,905 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:45,944 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5356/7340 [196:28<72:46, 27.3 steps/min]\u001b[92m18:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:47,296 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 110, 'y': 175}, {'x': 258, 'y': 175}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 110, 'y': 175}, {'x': 258, 'y': 175}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:48,622 - agent.ComputerAgent - INFO - Computer: type({'text': 'site:arxiv-daily.com \"Oct 11, 2023\" arxiv daily foundation models'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'site:arxiv-daily.com \"Oct 11, 2023\" arxiv daily foundation models'})\n",
+ "\u001b[92m18:42:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5358/7340 [196:30<72:41, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:49,303 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 315})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 671, 'y': 315})\n",
+ "2025-08-11 18:42:49,953 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:42:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:42:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5360/7340 [196:32<72:36, 27.3 steps/min]2025-08-11 18:42:51,293 - agent.ComputerAgent - INFO - Computer: click({'x': 166, 'y': 404})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 166, 'y': 404})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:54,020 - agent.ComputerAgent - INFO - Computer: type({'text': 'Auto Save Delay'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Auto Save Delay'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5361/7340 [196:36<72:34, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:42:55,342 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:42:55,344 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 385})\n",
+ "\u001b[92m18:42:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:56,679 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 172})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 86, 'y': 172})\n",
+ "\u001b[92m18:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5363/7340 [196:39<72:29, 27.3 steps/min]\u001b[92m18:42:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:58,001 - agent.ComputerAgent - INFO - Computer: click({'x': 483, 'y': 590})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 483, 'y': 590})\n",
+ "2025-08-11 18:42:58,657 - agent.ComputerAgent - INFO - Computer: click({'x': 354, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 354, 'y': 148})\n",
+ "\u001b[92m18:42:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:42:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:42:59,958 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:42:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:00,650 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 44})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 44})\n",
+ "\u001b[92m18:43:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5365/7340 [196:42<72:24, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:01,332 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 115})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 115})\n",
+ "\u001b[92m18:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:01,979 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 731})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 81, 'y': 731})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5368/7340 [196:43<72:16, 27.3 steps/min]2025-08-11 18:43:02,638 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:43:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5370/7340 [196:46<72:11, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:06,408 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:43:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5371/7340 [196:48<72:08, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:07,096 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:43:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:08,822 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "2025-08-11 18:43:09,498 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:43:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5371/7340 [196:51<72:09, 27.3 steps/min]2025-08-11 18:43:10,521 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:43:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:11,856 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:43:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:12,507 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:43:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:13,170 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:43:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5371/7340 [196:54<72:11, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:13,845 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:43:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:14,507 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:43:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:15,168 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:15,818 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:17,185 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 73%|█████████████████████████████-----------| 5372/7340 [196:58<72:09, 27.3 steps/min]2025-08-11 18:43:17,868 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:43:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:18,539 - agent.ComputerAgent - INFO - Computer: click({'x': 600, 'y': 535})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 600, 'y': 535})\n",
+ "2025-08-11 18:43:19,160 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:43:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5373/7340 [197:00<72:07, 27.3 steps/min]2025-08-11 18:43:19,808 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:43:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:20,883 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:43:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5374/7340 [197:02<72:05, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:21,567 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m18:43:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5374/7340 [197:04<72:05, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:23,406 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 86})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 86})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:24,784 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 73%|█████████████████████████████-----------| 5374/7340 [197:06<72:06, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:25,466 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:43:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5377/7340 [197:07<71:57, 27.3 steps/min]2025-08-11 18:43:26,158 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:43:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 73%|█████████████████████████████-----------| 5377/7340 [197:08<71:58, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:27,307 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m18:43:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5377/7340 [197:09<71:58, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:43:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:29,161 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 136})\n",
+ " 73%|█████████████████████████████-----------| 5378/7340 [197:11<71:56, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:30,828 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:43:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5379/7340 [197:13<71:54, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:32,138 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m18:43:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:43:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:32,794 - agent.ComputerAgent - INFO - Computer: click({'x': 321, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 321, 'y': 153})\n",
+ " 73%|█████████████████████████████-----------| 5379/7340 [197:14<71:54, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:34,502 - agent.ComputerAgent - INFO - Computer: type({'text': 'COMPANY'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'COMPANY'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:43:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5380/7340 [197:17<71:52, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:43:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:43:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:37,151 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 381})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 381})\n",
+ "2025-08-11 18:43:37,829 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:43:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:43:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5382/7340 [197:19<71:47, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:43:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:38,539 - agent.ComputerAgent - INFO - Computer: click({'x': 993, 'y': 757})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 993, 'y': 757})\n",
+ "2025-08-11 18:43:39,212 - agent.ComputerAgent - INFO - Computer: click({'x': 631, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 631, 'y': 529})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5383/7340 [197:21<71:45, 27.3 steps/min]\u001b[92m18:43:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:43:41,820 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m18:43:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5385/7340 [197:23<71:39, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:42,479 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m18:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:43,142 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 52})\n",
+ "2025-08-11 18:43:43,809 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:43:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:45,198 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ " 73%|█████████████████████████████-----------| 5385/7340 [197:26<71:40, 27.3 steps/min]\u001b[92m18:43:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:43:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5387/7340 [197:28<71:35, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:47,634 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 247})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 554, 'y': 247})\n",
+ "\u001b[92m18:43:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:48,299 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:43:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:48,968 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:43:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:49,657 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:43:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:43:50,322 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 248})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5387/7340 [197:32<71:37, 27.3 steps/min]\u001b[92m18:43:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:52,713 - agent.ComputerAgent - INFO - Computer: click({'x': 494, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 494, 'y': 90})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:54,451 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ " 73%|█████████████████████████████-----------| 5389/7340 [197:36<71:32, 27.3 steps/min]\u001b[92m18:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:55,804 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 210})\n",
+ "\u001b[92m18:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 73%|█████████████████████████████-----------| 5390/7340 [197:37<71:29, 27.3 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:43:56,446 - agent.ComputerAgent - INFO - Computer: click({'x': 518, 'y': 456})\n",
+ "\u001b[92m18:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:57,069 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 44})\n",
+ "2025-08-11 18:43:57,739 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 59, 'y': 178}, {'x': 256, 'y': 177}]})\n",
+ "2025-08-11 18:43:58,404 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:43:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 73%|█████████████████████████████-----------| 5391/7340 [197:40<71:28, 27.3 steps/min]\u001b[92m18:43:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:43:59,779 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m18:43:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 73%|█████████████████████████████-----------| 5394/7340 [197:41<71:19, 27.3 steps/min]2025-08-11 18:44:00,469 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:44:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:01,159 - agent.ComputerAgent - INFO - Computer: click({'x': 366, 'y': 352})\n",
+ " 74%|█████████████████████████████-----------| 5395/7340 [197:43<71:17, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5396/7340 [197:44<71:14, 27.3 steps/min]2025-08-11 18:44:03,809 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:04,525 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m18:44:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5396/7340 [197:46<71:15, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:05,227 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:44:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:05,908 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:44:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5396/7340 [197:48<71:16, 27.3 steps/min]\u001b[92m18:44:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:07,899 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:44:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:44:08,587 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:44:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:44:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:11,274 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m18:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 74%|█████████████████████████████-----------| 5397/7340 [197:53<71:14, 27.3 steps/min]\u001b[92m18:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:12,599 - agent.ComputerAgent - INFO - Computer: click({'x': 861, 'y': 182})\n",
+ "2025-08-11 18:44:13,259 - agent.ComputerAgent - INFO - Computer: click({'x': 556, 'y': 451})\n",
+ "\u001b[92m18:44:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:44:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:14,601 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:15,282 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 982, 'y': 760})\n",
+ " 74%|█████████████████████████████-----------| 5397/7340 [197:57<71:15, 27.3 steps/min]\u001b[92m18:44:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:15,920 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 13})\n",
+ "2025-08-11 18:44:16,569 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:44:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:44:17,233 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 115})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:44:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:19,876 - agent.ComputerAgent - INFO - Computer: type({'text': '500'})\n",
+ " 74%|█████████████████████████████-----------| 5420/7340 [198:01<70:08, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:20,559 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 72})\n",
+ "\u001b[92m18:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:21,230 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "2025-08-11 18:44:21,899 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:44:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5423/7340 [198:03<70:00, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:23,203 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:44:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<69:55, 27.4 steps/min]2025-08-11 18:44:23,875 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m18:44:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5425/7340 [198:06<69:55, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 74%|█████████████████████████████-----------| 5426/7340 [198:09<69:53, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 18:44:28,919 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:44:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5426/7340 [198:10<69:54, 27.4 steps/min]2025-08-11 18:44:29,655 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m18:44:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:30,932 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:44:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:44:31,621 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:44:32,270 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:44:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:44:32,922 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:44:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 74%|█████████████████████████████-----------| 5426/7340 [198:15<69:56, 27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:34,890 - agent.ComputerAgent - INFO - Computer: type({'text': 'Team-Solar-System-Doc'})\n",
+ "2025-08-11 18:44:35,896 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 136})\n",
+ "2025-08-11 18:44:36,573 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:37,926 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 74%|█████████████████████████████-----------| 5427/7340 [198:19<69:54, 27.4 steps/min]\u001b[92m18:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:39,275 - agent.ComputerAgent - INFO - Computer: click({'x': 140, 'y': 350})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:39,930 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:44:40,615 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:44:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 74%|█████████████████████████████-----------| 5430/7340 [198:22<69:46, 27.4 steps/min]\n",
+ "2025-08-11 18:44:41,306 - agent.ComputerAgent - INFO - Computer: click({'x': 409, 'y': 115})\n",
+ "2025-08-11 18:44:41,971 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:44:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5431/7340 [198:23<69:44, 27.4 steps/min]2025-08-11 18:44:42,662 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:44:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5432/7340 [198:24<69:41, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:43,878 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m18:44:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5432/7340 [198:25<69:41, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 74%|█████████████████████████████-----------| 5432/7340 [198:26<69:42, 27.4 steps/min]\u001b[92m18:44:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:45,634 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 115, 'y': 92})\n",
+ " 74%|█████████████████████████████-----------| 5432/7340 [198:27<69:42, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:44:47,700 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5434/7340 [198:29<69:37, 27.4 steps/min]2025-08-11 18:44:49,653 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:44:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5434/7340 [198:32<69:38, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:51,680 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:44:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:44:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:44:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5434/7340 [198:34<69:38, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:53,025 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 385})\n",
+ "2025-08-11 18:44:53,662 - agent.ComputerAgent - INFO - Computer: click({'x': 832, 'y': 479})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 832, 'y': 479})\n",
+ "\u001b[92m18:44:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5434/7340 [198:35<69:39, 27.4 steps/min]2025-08-11 18:44:54,373 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 760})\n",
+ "2025-08-11 18:44:55,056 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:44:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5436/7340 [198:39<69:34, 27.4 steps/min]\u001b[92m18:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:44:58,412 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m18:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:59,058 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 182, 'y': 181})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:44:59,721 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5437/7340 [198:41<69:32, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:01,092 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 18:45:01,735 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 71})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 71})\n",
+ "2025-08-11 18:45:02,463 - agent.ComputerAgent - INFO - Computer: click({'x': 134, 'y': 415})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 134, 'y': 415})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:45:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5439/7340 [198:44<69:27, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:03,864 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:45:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:04,625 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 289})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 289})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:45:05,991 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 74%|█████████████████████████████-----------| 5441/7340 [198:47<69:22, 27.4 steps/min]2025-08-11 18:45:06,661 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:45:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:45:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:07,981 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:08,670 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 405})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 316, 'y': 405})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 74%|█████████████████████████████-----------| 5442/7340 [198:51<69:21, 27.4 steps/min]\u001b[92m18:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:45:10,029 - agent.ComputerAgent - INFO - Computer: click({'x': 321, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 321, 'y': 92})\n",
+ "\u001b[92m18:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 74%|█████████████████████████████-----------| 5443/7340 [198:52<69:18, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:11,616 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 180}, {'x': 426, 'y': 178}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 80, 'y': 180}, {'x': 426, 'y': 178}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5444/7340 [198:53<69:16, 27.4 steps/min]2025-08-11 18:45:12,270 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:45:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:12,927 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:45:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:14,243 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ " 74%|█████████████████████████████-----------| 5445/7340 [198:55<69:14, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:16,281 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5446/7340 [198:58<69:11, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:45:16,925 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:17,600 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:45:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:45:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:18,221 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:18,870 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:19,531 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:45:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:45:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5446/7340 [199:01<69:13, 27.4 steps/min]2025-08-11 18:45:20,898 - agent.ComputerAgent - INFO - Computer: click({'x': 429, 'y': 211})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 429, 'y': 211})\n",
+ "2025-08-11 18:45:21,575 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 44})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 44})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5446/7340 [199:03<69:13, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:45:22,219 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:45:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:45:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:22,868 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:45:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:23,581 - agent.ComputerAgent - INFO - Computer: click({'x': 611, 'y': 457})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 611, 'y': 457})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5448/7340 [199:05<69:08, 27.4 steps/min]2025-08-11 18:45:24,229 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:45:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5449/7340 [199:07<69:06, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 74%|█████████████████████████████-----------| 5449/7340 [199:08<69:06, 27.4 steps/min]2025-08-11 18:45:27,084 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:45:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:45:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:28,427 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f401a79d-adad-434a-bfd4-3cedfc7a51ad/close \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5449/7340 [199:10<69:07, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:29,729 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 140})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 248, 'y': 140})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5450/7340 [199:11<69:04, 27.4 steps/min]2025-08-11 18:45:30,391 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:31,070 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:45:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5451/7340 [199:12<69:02, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:32,772 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:45:34,138 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 74%|█████████████████████████████-----------| 5451/7340 [199:15<69:03, 27.4 steps/min]2025-08-11 18:45:35,670 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:45:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5451/7340 [199:17<69:03, 27.4 steps/min]2025-08-11 18:45:36,307 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:45:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:38,739 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://arxiv-daily.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://arxiv-daily.com'})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<69:04, 27.3 steps/min]2025-08-11 18:45:39,409 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:45:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:45:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5452/7340 [199:21<69:02, 27.3 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.70s/it]2025-08-11 18:45:41,339 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]27.3 steps/min]2025-08-11 18:45:42,698 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:45:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 74%|█████████████████████████████-----------| 5452/7340 [199:24<69:03, 27.3 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]27.3 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 74%|█████████████████████████████-----------| 5452/7340 [199:26<69:03, 27.3 steps/min]2025-08-11 18:45:45,755 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:45:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:45:48,287 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 52})\n",
+ "2025-08-11 18:45:48,920 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 115, 'y': 90})\n",
+ "\u001b[92m18:45:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:50,266 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:51,552 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+v'})\n",
+ " 74%|█████████████████████████████-----------| 5452/7340 [199:33<69:06, 27.3 steps/min]2025-08-11 18:45:52,222 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 136})\n",
+ "2025-08-11 18:45:52,917 - agent.ComputerAgent - INFO - Computer: click({'x': 483, 'y': 436})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:45:55,541 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:45:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:45:57,541 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:45:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:45:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:46:00,182 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 74%|█████████████████████████████-----------| 5454/7340 [199:41<69:03, 27.3 steps/min]2025-08-11 18:46:00,852 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:46:02,221 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 18:46:02,872 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:46:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:03,977 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 245})\n",
+ "2025-08-11 18:46:04,654 - agent.ComputerAgent - INFO - Computer: click({'x': 648, 'y': 451})\n",
+ "2025-08-11 18:46:05,312 - agent.ComputerAgent - INFO - Computer: click({'x': 748, 'y': 728})\n",
+ "2025-08-11 18:46:05,983 - agent.ComputerAgent - INFO - Computer: click({'x': 528, 'y': 395})\n",
+ "\u001b[92m18:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 74%|█████████████████████████████-----------| 5456/7340 [199:48<68:59, 27.3 steps/min]\u001b[92m18:46:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:46:07,349 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 125})\n",
+ "2025-08-11 18:46:08,014 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 90})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 75%|█████████████████████████████-----------| 5481/7340 [199:49<67:46, 27.4 steps/min]\u001b[92m18:46:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:46:09,302 - agent.ComputerAgent - INFO - Computer: click({'x': 294, 'y': 184})\n",
+ " 75%|█████████████████████████████-----------| 5483/7340 [199:51<67:41, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5484/7340 [199:55<67:39, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:14,540 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:46:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5489/7340 [199:56<67:25, 27.5 steps/min]2025-08-11 18:46:15,212 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:46:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/01670e8a-9251-451a-92ad-d842f073c97a/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:15,913 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:46:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:16,918 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5491/7340 [199:59<67:20, 27.5 steps/min]2025-08-11 18:46:18,303 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:18,952 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|█████████████████████████████-----------| 5491/7340 [200:00<67:21, 27.5 steps/min]2025-08-11 18:46:19,637 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:46:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:20,275 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:46:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|█████████████████████████████-----------| 5491/7340 [200:02<67:21, 27.5 steps/min]2025-08-11 18:46:20,952 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:46:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:21,611 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:46:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a6ead00-3730-4f34-9acb-3c8109ec140a/close \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5491/7340 [200:04<67:22, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:25,059 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(B3:E3)'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5491/7340 [200:06<67:23, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6010cd2c-5fad-4a91-8ba6-9ed2a34b6453/close \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5493/7340 [200:09<67:18, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "ERROR:asyncio:Unclosed client session\n",
+ "client_session: \n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:46:31,053 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.74s/it]\u001b[92m18:46:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]27.4 steps/min]\n",
+ " 75%|█████████████████████████████-----------| 5493/7340 [200:18<67:21, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:46:38,090 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 75%|█████████████████████████████-----------| 5493/7340 [200:20<67:21, 27.4 steps/min]\u001b[92m18:46:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:46:39,445 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 185})\n",
+ "\u001b[92m18:46:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:46:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:46:40,131 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "2025-08-11 18:46:40,798 - agent.ComputerAgent - INFO - Computer: click({'x': 298, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:46:42,803 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:46:44,132 - agent.ComputerAgent - INFO - Computer: type({'text': 'arxiv-daily.com'})\n",
+ " 75%|█████████████████████████████-----------| 5494/7340 [200:25<67:20, 27.4 steps/min]2025-08-11 18:46:44,809 - agent.ComputerAgent - INFO - Computer: click({'x': 258, 'y': 124})\n",
+ "\u001b[92m18:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:46:45,473 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:46:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:46,150 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 241})\n",
+ " 75%|█████████████████████████████-----------| 5498/7340 [200:27<67:09, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|█████████████████████████████-----------| 5500/7340 [200:29<67:04, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:46:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:46:48,687 - agent.ComputerAgent - INFO - Computer: click({'x': 121, 'y': 92})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:46:50,065 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 75%|█████████████████████████████-----------| 5500/7340 [200:31<67:05, 27.4 steps/min]2025-08-11 18:46:50,723 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 654, 'x': 769, 'y': 95})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:51,352 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:46:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:52,503 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:46:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:34<67:00, 27.4 steps/min]2025-08-11 18:46:53,150 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:46:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:53,863 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:46:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:54,904 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:46:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:36<67:00, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:55,569 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:46:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:46:56,262 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:46:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:38<67:01, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:46:57,472 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:46:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:39<67:01, 27.4 steps/min]2025-08-11 18:46:58,132 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:46:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:40<67:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:41<67:02, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a9dd85a-f951-495e-aea0-d3864853591e/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:47:01,015 - agent.ComputerAgent - INFO - Computer: click({'x': 317, 'y': 536})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 317, 'y': 536})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|█████████████████████████████-----------| 5502/7340 [200:42<67:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/308e9db5-e6b1-4244-824c-6ce22d6cfc64/close \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:43<66:45, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:45<66:46, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:05,023 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:46<66:46, 27.4 steps/min]2025-08-11 18:47:06,222 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:47:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:47<66:47, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:48<66:47, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:08,578 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:50<66:48, 27.4 steps/min]2025-08-11 18:47:09,720 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:47:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:47:10,390 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:52<66:48, 27.4 steps/min]\u001b[92m18:47:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:55<66:49, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<66:50, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:58<66:50, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [200:59<66:51, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.31s/it]\n",
+ "2025-08-11 18:47:22,087 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [201:05<66:53, 27.4 steps/min]\u001b[92m18:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:47:24,753 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:47:25,449 - agent.ComputerAgent - INFO - Computer: click({'x': 367, 'y': 97})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 367, 'y': 97})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:47:26,115 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 153})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:47:26,755 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 124})\n",
+ "\u001b[92m18:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|██████████████████████████████----------| 5508/7340 [201:08<66:54, 27.4 steps/min]\u001b[92m18:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:47:27,780 - agent.ComputerAgent - INFO - Computer: click({'x': 861, 'y': 159})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 861, 'y': 159})\n",
+ "2025-08-11 18:47:28,460 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "2025-08-11 18:47:29,125 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -649, 'scroll_x': 0, 'x': 721, 'y': 95})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -649, 'scroll_x': 0, 'x': 721, 'y': 95})\n",
+ "2025-08-11 18:47:29,805 - agent.ComputerAgent - INFO - Computer: click({'x': 253, 'y': 179})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 253, 'y': 179})\n",
+ "2025-08-11 18:47:30,858 - agent.ComputerAgent - INFO - Computer: click({'x': 602, 'y': 461})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 602, 'y': 461})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|██████████████████████████████----------| 5511/7340 [201:13<66:46, 27.4 steps/min]\u001b[92m18:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:47:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:47:32,737 - agent.ComputerAgent - INFO - Computer: click({'x': 352, 'y': 347})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 352, 'y': 347})\n",
+ " 75%|██████████████████████████████----------| 5516/7340 [201:14<66:32, 27.4 steps/min]\u001b[92m18:47:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:47:33,386 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 461, 'y': 213}, {'x': 432, 'y': 282}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 461, 'y': 213}, {'x': 432, 'y': 282}]})\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:17<66:27, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:36,602 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:47:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:37,253 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:47:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:19<66:28, 27.4 steps/min]2025-08-11 18:47:37,924 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:47:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:47:38,575 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:20<66:28, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:39,254 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:47:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:40,317 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:22<66:29, 27.4 steps/min]2025-08-11 18:47:40,984 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:47:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:47:41,657 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:47:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:23<66:29, 27.4 steps/min]2025-08-11 18:47:42,732 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:47:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:24<66:30, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:25<66:30, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:46,003 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1fef1c7a-93ef-4a63-b067-399dfc4ff08a/close \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:29<66:31, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:47:50,568 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt'})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5518/7340 [201:32<66:33, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|██████████████████████████████----------| 5524/7340 [201:33<66:15, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e42da596-e101-4fd3-9dea-8a1d63615dad/close \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5524/7340 [201:35<66:16, 27.4 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5524/7340 [201:36<66:16, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:47:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|██████████████████████████████----------| 5524/7340 [201:37<66:16, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|██████████████████████████████----------| 5524/7340 [201:38<66:17, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.57s/it]27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:00,230 - agent.ComputerAgent - INFO - Computer: type({'text': 'Python'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Python'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:42<66:01, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b3df65c5-9d1c-44fd-b9bb-37f1f0cd64dc/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:02,759 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:48:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:44<66:01, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.83s/it]\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:45<66:02, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:46<66:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<66:03, 27.4 steps/min]2025-08-11 18:48:07,641 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:48:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:49<66:03, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:51<66:04, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:04<00:03, 1.96s/it]27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:53<66:04, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.79s/it]\n",
+ "\u001b[92m18:48:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.54s/it]\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:55<66:05, 27.4 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 75%|██████████████████████████████----------| 5530/7340 [201:56<66:05, 27.4 steps/min]\u001b[92m18:48:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:15,520 - agent.ComputerAgent - INFO - Computer: click({'x': 376, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 376, 'y': 427})\n",
+ " 76%|██████████████████████████████----------| 5550/7340 [201:57<65:08, 27.5 steps/min]\u001b[92m18:48:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:48:17,342 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 624})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 652, 'y': 624})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5552/7340 [202:00<65:03, 27.5 steps/min]\u001b[92m18:48:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:48:19,539 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 115})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 296, 'y': 115})\n",
+ " 76%|██████████████████████████████----------| 5553/7340 [202:02<65:01, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:48:21,721 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:48:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5553/7340 [202:04<65:01, 27.5 steps/min]\u001b[92m18:48:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:23,040 - agent.ComputerAgent - INFO - Computer: click({'x': 301, 'y': 69})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 301, 'y': 69})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8a9dd85a-f951-495e-aea0-d3864853591e/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:48:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:48:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5553/7340 [202:05<65:02, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:24,321 - agent.ComputerAgent - INFO - Computer: click({'x': 315, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 315, 'y': 153})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\n",
+ "2025-08-11 18:48:24,980 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5554/7340 [202:07<64:59, 27.5 steps/min]2025-08-11 18:48:26,588 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.93s/it]\n",
+ "\u001b[92m18:48:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 76%|██████████████████████████████----------| 5555/7340 [202:09<64:57, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "\u001b[92m18:48:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:31,104 - agent.ComputerAgent - INFO - Computer: click({'x': 50, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 50, 'y': 52})\n",
+ " 76%|██████████████████████████████----------| 5555/7340 [202:12<64:58, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a9dd85a-f951-495e-aea0-d3864853591e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:32,431 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:48:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 76%|██████████████████████████████----------| 5556/7340 [202:14<64:56, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:48:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:48:33,121 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 178})\n",
+ "\u001b[92m18:48:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:33,780 - agent.ComputerAgent - INFO - Computer: click({'x': 432, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 432, 'y': 232})\n",
+ "2025-08-11 18:48:34,416 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:48:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:48:35,091 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ " 76%|██████████████████████████████----------| 5556/7340 [202:16<64:57, 27.5 steps/min]\u001b[92m18:48:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:48:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:35,748 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 90})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:37,126 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5558/7340 [202:18<64:51, 27.5 steps/min]2025-08-11 18:48:37,789 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:48:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:48:38,483 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:48:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 76%|██████████████████████████████----------| 5559/7340 [202:23<64:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5559/7340 [202:24<64:50, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:43,882 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:48:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:48:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5559/7340 [202:25<64:51, 27.5 steps/min]2025-08-11 18:48:44,573 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:45,241 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:48:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 76%|██████████████████████████████----------| 5559/7340 [202:27<64:51, 27.5 steps/min]2025-08-11 18:48:45,916 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:48:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5560/7340 [202:28<64:49, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:47,744 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 178})\n",
+ " 76%|██████████████████████████████----------| 5560/7340 [202:29<64:49, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa800986-7030-4845-b4a1-82119abb97e9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5561/7340 [202:30<64:47, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:49,421 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:48:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:50,818 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 76%|██████████████████████████████----------| 5561/7340 [202:32<64:47, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/aa800986-7030-4845-b4a1-82119abb97e9/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:51,973 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5561/7340 [202:35<64:48, 27.5 steps/min]\u001b[92m18:48:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa800986-7030-4845-b4a1-82119abb97e9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:48:53,985 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:48:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:48:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:48:54,612 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:48:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:48:55,305 - agent.ComputerAgent - INFO - Computer: click({'x': 747, 'y': 728})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 747, 'y': 728})\n",
+ " 76%|██████████████████████████████----------| 5561/7340 [202:37<64:49, 27.4 steps/min]\u001b[92m18:48:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:48:55,982 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:48:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:48:56,670 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 125})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 83, 'y': 125})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:48:57,993 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:48:57,993 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 76%|██████████████████████████████----------| 5564/7340 [202:40<64:41, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5564/7340 [202:42<64:42, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:49:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:49:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 76%|██████████████████████████████----------| 5564/7340 [202:43<64:42, 27.4 steps/min]\u001b[92m18:49:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:02,548 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 399})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 399})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:49:03,233 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 463, 'y': 213}, {'x': 432, 'y': 282}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 463, 'y': 213}, {'x': 432, 'y': 282}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5564/7340 [202:44<64:42, 27.4 steps/min]2025-08-11 18:49:03,882 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:49:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5566/7340 [202:45<64:37, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:05,778 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 76%|██████████████████████████████----------| 5566/7340 [202:47<64:38, 27.4 steps/min]2025-08-11 18:49:06,433 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:49:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 76%|██████████████████████████████----------| 5567/7340 [202:48<64:35, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5567/7340 [202:49<64:35, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:09,270 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:10,566 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:49:10,568 - agent.ComputerAgent - INFO - Agent: This isn’t a cache issue. Chrome is being told to open that site by a setting, an extension/policy, or a modified shortcut. Do the steps below in order (quickest first). They’re written for Windows on your Surface Pro.\n",
+ "\n",
+ "1) Fix Chrome’s startup settings\n",
+ "- Open Chrome.\n",
+ "- Menu (three dots) > Settings > On startup.\n",
+ "- Select “Open the New Tab page,” or “Continue where you left off.”\n",
+ "- If “Open a specific page or set of pages” is selected, remove funbrain.com (click the three dots beside it > Remove).\n",
+ "- Also check Settings > Appearance > Show home button. If enabled, set it to “New Tab page” and make sure funbrain.com isn’t listed there.\n",
+ "\n",
+ "2) Unpin any pinned tab\n",
+ "- If you see a tiny pin icon on the funbrain tab, right‑click the tab > Unpin. Close it.\n",
+ "\n",
+ "3) Remove any extension hijacking your homepage/new tab\n",
+ "- Go to chrome://extensions\n",
+ "- Toggle off or Remove any unknown extensions, especially ones referencing “New Tab,” “Home,” “Search,” or anything you don’t recognize.\n",
+ "- Restart Chrome.\n",
+ "\n",
+ "4) Check if Chrome is “Managed by your organization” (policy hijack)\n",
+ "- Go to chrome://policy\n",
+ "- If you see policies like HomepageLocation, NewTabPageLocation, RestoreOnStartup, or StartupURLs pointing to funbrain.com (and this is your personal device), remove the policies:\n",
+ " - Press Windows key, type “regedit”, open Registry Editor.\n",
+ " - Navigate to:\n",
+ " - HKEY_CURRENT_USER\\Software\\Policies\\Google\\Chrome\n",
+ " - HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Google\\Chrome\n",
+ " - Delete values like HomepageLocation, StartupURLs, RestoreOnStartup, NewTabPageLocation, or delete the Chrome key entirely if it only contains unwanted values.\n",
+ " - Easier method (PowerShell as Administrator):\n",
+ " Remove-Item -Path \"HKCU:\\Software\\Policies\\Google\\Chrome\" -Recurse -Force -ErrorAction SilentlyContinue\n",
+ " Remove-Item -Path \"HKLM:\\SOFTWARE\\Policies\\Google\\Chrome\" -Recurse -Force -ErrorAction SilentlyContinue\n",
+ " - Reboot and recheck chrome://policy (it should be empty or unrelated).\n",
+ "\n",
+ "5) Fix the shortcut that launches Chrome\n",
+ "- Right‑click the Chrome shortcut you normall...\n",
+ "2025-08-11 18:49:11,237 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 2424\n",
+ " - prompt_tokens: 1953\n",
+ " - total_tokens: 4377\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1600\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0267\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5568/7340 [202:52<64:34, 27.4 steps/min]2025-08-11 18:49:11,903 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:49:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:49:12,597 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:49:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5568/7340 [202:54<64:34, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:13,963 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 76%|██████████████████████████████----------| 5568/7340 [202:55<64:34, 27.4 steps/min]\u001b[92m18:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:49:14,626 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 18, 'y': 237})\n",
+ "2025-08-11 18:49:16,032 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:49:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a9dd85a-f951-495e-aea0-d3864853591e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5568/7340 [202:57<64:35, 27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:17,933 - agent.ComputerAgent - INFO - Computer: type({'text': 'ICLR 2013 location'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:19,279 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 76%|██████████████████████████████----------| 5569/7340 [203:01<64:33, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a9dd85a-f951-495e-aea0-d3864853591e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:49:21,241 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:49:21,241 - agent.ComputerAgent - INFO - Agent: Yes. VLC is set by default to reuse the first instance when you open a file (especially from the file manager). Turn that off and you can run multiple, independent players.\n",
+ "\n",
+ "Do this:\n",
+ "- Open VLC.\n",
+ "- Go to Tools -> Preferences (Ctrl+P).\n",
+ "- In the Simple settings view on the Interface tab, find Playlist and Instances.\n",
+ "- Uncheck:\n",
+ " - Allow only one instance\n",
+ " - Use only one instance when started from file manager\n",
+ " - Enqueue items into playlist in one instance (if you see it)\n",
+ "- Click Save.\n",
+ "- Quit all VLC windows completely (Ctrl+Q) so the setting takes effect, then start VLC again and open a different file in each window.\n",
+ "\n",
+ "Command-line alternative (Windows/macOS/Linux):\n",
+ "- Start each player with:\n",
+ " - vlc --no-one-instance --no-one-instance-when-started-from-file-manager file1.mp4\n",
+ " - vlc --no-one-instance file2.mp4\n",
+ "\n",
+ "After this, each VLC window will play its own video simultaneously.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:49:21,909 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1307\n",
+ " - prompt_tokens: 1975\n",
+ " - total_tokens: 3282\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1088\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0155\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 76%|██████████████████████████████----------| 5590/7340 [203:04<63:34, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:49:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0835b0ab-8369-4209-a85b-955dbb271b54/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:23,196 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:23,915 - agent.ComputerAgent - INFO - Computer: click({'x': 109, 'y': 125})\n",
+ "\u001b[92m18:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8a9dd85a-f951-495e-aea0-d3864853591e/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/655a0f34-fb5e-49f8-9a65-531af668d6c6/close \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5591/7340 [203:05<63:31, 27.5 steps/min]2025-08-11 18:49:24,590 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 75, 'y': 177}, {'x': 278, 'y': 177}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa800986-7030-4845-b4a1-82119abb97e9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5592/7340 [203:07<63:29, 27.5 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5600/7340 [203:08<63:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:28,277 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5600/7340 [203:10<63:07, 27.6 steps/min]2025-08-11 18:49:28,957 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:49:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:49:30,327 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:49:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84265bb9-b6f6-479e-8a58-920cfa2b7c69/close \"HTTP/1.1 200 OK\"\n",
+ " 76%|██████████████████████████████----------| 5600/7340 [203:12<63:08, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa800986-7030-4845-b4a1-82119abb97e9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:32,369 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:49:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:49:33,048 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:49:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5619/7340 [203:14<62:15, 27.6 steps/min]2025-08-11 18:49:33,737 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:49:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.89s/it]27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/aa800986-7030-4845-b4a1-82119abb97e9/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5619/7340 [203:18<62:16, 27.6 steps/min]\u001b[92m18:49:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.52s/it]27.6 steps/min]\n",
+ " 77%|██████████████████████████████----------| 5619/7340 [203:21<62:17, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.55s/it]27.6 steps/min]\n",
+ " 77%|██████████████████████████████----------| 5619/7340 [203:26<62:18, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:46,381 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 240})\n",
+ " 77%|██████████████████████████████----------| 5620/7340 [203:29<62:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:49:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:48,701 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:49:48,702 - agent.ComputerAgent - INFO - Computer: double_click({'x': 213, 'y': 137})\n",
+ " 77%|██████████████████████████████----------| 5620/7340 [203:30<62:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:49:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:49:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:49:50,004 - agent.ComputerAgent - INFO - Computer: click({'x': 472, 'y': 153})\n",
+ " 77%|██████████████████████████████----------| 5622/7340 [203:32<62:12, 27.6 steps/min]\u001b[92m18:49:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:51,653 - agent.ComputerAgent - INFO - Computer: move({'x': 106, 'y': 125})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:53,373 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m18:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5622/7340 [203:35<62:12, 27.6 steps/min]2025-08-11 18:49:54,053 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 205})\n",
+ "2025-08-11 18:49:54,688 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:49:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5624/7340 [203:36<62:07, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:55,359 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5625/7340 [203:37<62:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:49:57,253 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5625/7340 [203:39<62:05, 27.6 steps/min]\u001b[92m18:49:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:49:58,590 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:49:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:49:59,278 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:49:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5625/7340 [203:41<62:06, 27.6 steps/min]2025-08-11 18:50:00,670 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:50:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5625/7340 [203:42<62:06, 27.6 steps/min]2025-08-11 18:50:01,350 - agent.ComputerAgent - INFO - Computer: click({'x': 432, 'y': 211})\n",
+ "2025-08-11 18:50:02,063 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:50:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5625/7340 [203:43<62:06, 27.6 steps/min]2025-08-11 18:50:03,259 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m18:50:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5626/7340 [203:45<62:04, 27.6 steps/min]\u001b[92m18:50:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 77%|██████████████████████████████----------| 5626/7340 [203:46<62:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:50:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 77%|██████████████████████████████----------| 5628/7340 [203:47<61:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/a72854f0-3bb0-4711-a18e-7a467a56390e/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5628/7340 [203:48<61:59, 27.6 steps/min]2025-08-11 18:50:07,809 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:50:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:50:08,493 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 80, 'y': 178}, {'x': 256, 'y': 178}]})\n",
+ " 77%|██████████████████████████████----------| 5628/7340 [203:50<62:00, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [203:51<61:57, 27.6 steps/min]\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [203:52<61:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.60s/it]\u001b[92m18:50:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [203:55<61:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]2025-08-11 18:50:14,763 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:50:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [203:57<61:59, 27.6 steps/min]\u001b[92m18:50:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [203:58<61:59, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [203:59<62:00, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:01<62:00, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b371bc4c-8484-4896-8440-445fc0c258a1/reset \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:02<62:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:03<62:01, 27.6 steps/min]2025-08-11 18:50:21,690 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:50:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:04<62:01, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:05<62:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:06<62:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5629/7340 [204:07<62:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:50:27,814 - agent.ComputerAgent - INFO - Agent: The Python extension by Microsoft has been installed and is enabled in VS Code. You can see the “Disable” and “Uninstall” buttons on the extension page, confirming it’s active. You’re ready to use Python features (IntelliSense, linting, debugging, etc.) in VS Code.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: The Python extension by Microsoft has been installed and is enabled in VS Code. You can see the “Disable” and “Uninstall” buttons on the extension page, confirming it’s active. You’re ready to use Python features (IntelliSense, linting, debugging, etc.) in VS Code.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 18:50:28,490 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 456\n",
+ " - prompt_tokens: 11511\n",
+ " - total_tokens: 11967\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0161\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 456\n",
+ " - prompt_tokens: 11511\n",
+ " - total_tokens: 11967\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 2560\n",
+ " - response_cost: $0.0161\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9967e9e1-9446-4465-a911-ca5b69bde420/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:50:29,790 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:50:29,791 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 77%|██████████████████████████████----------| 5630/7340 [204:11<62:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:50:31,110 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:50:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5630/7340 [204:13<62:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5630/7340 [204:14<62:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:50:33,841 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:50:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5630/7340 [204:15<62:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5635/7340 [204:16<61:48, 27.6 steps/min]\u001b[92m18:50:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:50:36,060 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:50:36,061 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:50:37,430 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0ce9d177-2b9a-4fde-a8a5-eb1b59248c8f/close \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5637/7340 [204:20<61:43, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5637/7340 [204:21<61:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:50:39,773 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:50:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5637/7340 [204:24<61:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:50:43,483 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:50:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5637/7340 [204:25<61:45, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5637/7340 [204:26<61:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:50:46,320 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 77%|██████████████████████████████----------| 5637/7340 [204:28<61:46, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:29<61:43, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:30<61:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:31<61:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:50:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:32<61:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:33<61:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:50:52,659 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:37<61:46, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:38<61:46, 27.6 steps/min]2025-08-11 18:50:57,468 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:50:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:06<00:01, 1.93s/it]27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.71s/it]27.5 steps/min]\n",
+ "\u001b[92m18:50:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:50:59,789 - agent.ComputerAgent - INFO - Computer: click({'x': 22, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 22, 'y': 91})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:01,232 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://github.com/xlang-ai/instructor-embedding'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://github.com/xlang-ai/instructor-embedding'})\n",
+ "\u001b[92m18:51:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5638/7340 [204:42<61:47, 27.5 steps/min]2025-08-11 18:51:01,932 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 125})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 125})\n",
+ "2025-08-11 18:51:02,617 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:51:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5640/7340 [204:44<61:42, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:03,827 - agent.ComputerAgent - INFO - Computer: click({'x': 654, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 654, 'y': 213})\n",
+ " 77%|██████████████████████████████----------| 5641/7340 [204:45<61:40, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5642/7340 [204:46<61:37, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:05,686 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 124})\n",
+ " 77%|██████████████████████████████----------| 5642/7340 [204:47<61:38, 27.5 steps/min]\u001b[92m18:51:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5643/7340 [204:48<61:35, 27.6 steps/min]2025-08-11 18:51:07,358 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:51:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:08,030 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:51:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:51:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:08,729 - agent.ComputerAgent - INFO - Computer: click({'x': 288, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 288, 'y': 52})\n",
+ "2025-08-11 18:51:09,362 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:51:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 77%|██████████████████████████████----------| 5643/7340 [204:51<61:36, 27.5 steps/min]\u001b[92m18:51:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:10,733 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:10,733 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 189})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 189})\n",
+ "2025-08-11 18:51:11,411 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m18:51:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:12,743 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 77%|██████████████████████████████----------| 5644/7340 [204:54<61:34, 27.5 steps/min]\u001b[92m18:51:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:13,451 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:51:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:14,152 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:51:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5646/7340 [204:55<61:29, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:51:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:15,202 - agent.ComputerAgent - INFO - Computer: double_click({'x': 232, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 232, 'y': 105})\n",
+ " 77%|██████████████████████████████----------| 5646/7340 [204:56<61:29, 27.5 steps/min]\u001b[92m18:51:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:15,894 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:15,895 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 16, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 16, 'y': 429})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:16,581 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:51:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5647/7340 [204:58<61:27, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:51:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:17,756 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 148, 'y': 178}, {'x': 256, 'y': 176}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 148, 'y': 178}, {'x': 256, 'y': 176}]})\n",
+ " 77%|██████████████████████████████----------| 5648/7340 [204:59<61:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:18,382 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5649/7340 [205:00<61:22, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:19,712 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:51:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:51:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:20,366 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:20,367 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 402})\n",
+ " 77%|██████████████████████████████----------| 5649/7340 [205:02<61:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:21,687 - agent.ComputerAgent - INFO - Computer: type({'text': 'autoDocstring'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'autoDocstring'})\n",
+ "\u001b[92m18:51:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5650/7340 [205:03<61:20, 27.6 steps/min]2025-08-11 18:51:22,393 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 459, 'y': 214}, {'x': 432, 'y': 281}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 459, 'y': 214}, {'x': 432, 'y': 281}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:24,071 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:51:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:25,423 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5651/7340 [205:07<61:18, 27.5 steps/min]2025-08-11 18:51:26,072 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:51:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:51:26,735 - agent.ComputerAgent - INFO - Computer: click({'x': 308, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 308, 'y': 60})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5653/7340 [205:09<61:13, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:28,065 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:28,066 - agent.ComputerAgent - INFO - Computer: double_click({'x': 987, 'y': 659})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 987, 'y': 659})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:28,724 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:51:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5654/7340 [205:10<61:10, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:29,902 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 263, 'y': 124})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5657/7340 [205:12<61:03, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1064657b-b89a-4eeb-8197-1c110af6b752/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5657/7340 [205:13<61:03, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:32,885 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:51:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/27fc4825-1617-494a-9308-b128bd8af05a/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:33,562 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:51:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5657/7340 [205:15<61:04, 27.6 steps/min]\u001b[92m18:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:51:34,907 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:51:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:51:35,588 - agent.ComputerAgent - INFO - Computer: click({'x': 321, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 321, 'y': 153})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:51:36,275 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:51:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5657/7340 [205:18<61:04, 27.6 steps/min]2025-08-11 18:51:36,937 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:51:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:39,210 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5658/7340 [205:21<61:02, 27.6 steps/min]2025-08-11 18:51:40,512 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:51:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]27.5 steps/min]\n",
+ "2025-08-11 18:51:41,711 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:51:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:51:42,353 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:51:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5658/7340 [205:24<61:03, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:43,559 - agent.ComputerAgent - INFO - Computer: click({'x': 92, 'y': 231})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 92, 'y': 231})\n",
+ " 77%|██████████████████████████████----------| 5658/7340 [205:25<61:04, 27.5 steps/min]\u001b[92m18:51:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:44,246 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 842, 'y': 400})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 842, 'y': 400})\n",
+ "\u001b[92m18:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:44,935 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:44,936 - agent.ComputerAgent - INFO - Computer: click({'x': 805, 'y': 642})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 805, 'y': 642})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5659/7340 [205:27<61:01, 27.5 steps/min]\u001b[92m18:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:46,293 - agent.ComputerAgent - INFO - Computer: click({'x': 589, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 589, 'y': 339})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 77%|██████████████████████████████----------| 5661/7340 [205:28<60:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:47,646 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 117, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 117, 'y': 34})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:49,012 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:49,013 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 77%|██████████████████████████████----------| 5662/7340 [205:30<60:54, 27.6 steps/min]2025-08-11 18:51:49,684 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 284})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 284})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:52,002 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5664/7340 [205:33<60:49, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:52,623 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:51:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:51:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:54,665 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:51:54,666 - agent.ComputerAgent - INFO - Computer: click({'x': 1010, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1010, 'y': 64})\n",
+ "2025-08-11 18:51:55,298 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:51:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:56,643 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5665/7340 [205:39<60:48, 27.5 steps/min]\u001b[92m18:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:51:57,991 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:51:58,638 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 156})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 156})\n",
+ "2025-08-11 18:51:59,313 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "\u001b[92m18:51:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 503 Service Unavailable\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5666/7340 [205:41<60:46, 27.5 steps/min]2025-08-11 18:51:59,952 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:51:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:00,620 - agent.ComputerAgent - INFO - Computer: click({'x': 523, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 523, 'y': 178})\n",
+ "INFO:openai._base_client:Retrying request to /chat/completions in 0.457578 seconds\n",
+ " 77%|██████████████████████████████----------| 5668/7340 [205:42<60:40, 27.6 steps/min]2025-08-11 18:52:01,252 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:52:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:43<60:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:02,428 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:52:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:03,112 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/497d5104-1e6e-44a9-a164-fec745a337b6/close \"HTTP/1.1 200 OK\"\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:44<60:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:04,423 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:52:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:46<60:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:05,582 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m18:52:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:47<60:39, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:06,272 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:06,911 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:52:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:48<60:39, 27.5 steps/min]2025-08-11 18:52:08,113 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:52:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:49<60:40, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:50<60:40, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:52<60:40, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:52:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:54<60:41, 27.5 steps/min]\u001b[92m18:52:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:55<60:42, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.57s/it]\u001b[92m18:52:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 77%|██████████████████████████████----------| 5669/7340 [205:58<60:42, 27.5 steps/min]\u001b[92m18:52:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:19,490 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:52:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:20,837 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 335})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:21,513 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 289})\n",
+ " 77%|██████████████████████████████----------| 5678/7340 [206:03<60:18, 27.6 steps/min]\u001b[92m18:52:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:22,194 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 429})\n",
+ "\u001b[92m18:52:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:22,846 - agent.ComputerAgent - INFO - Computer: click({'x': 642, 'y': 390})\n",
+ "2025-08-11 18:52:23,530 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 141})\n",
+ "2025-08-11 18:52:24,207 - agent.ComputerAgent - INFO - Computer: click({'x': 234, 'y': 33})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:24,870 - agent.ComputerAgent - INFO - Computer: click({'x': 853, 'y': 514})\n",
+ "2025-08-11 18:52:25,520 - agent.ComputerAgent - INFO - Computer: click({'x': 630, 'y': 193})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:52:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5681/7340 [206:07<60:11, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:26,871 - agent.ComputerAgent - INFO - Computer: click({'x': 758, 'y': 443})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:27,535 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 462, 'y': 213}, {'x': 432, 'y': 281}]})\n",
+ "\u001b[92m18:52:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 77%|██████████████████████████████----------| 5687/7340 [206:09<59:55, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:28,177 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 87, 'y': 739})\n",
+ " 78%|███████████████████████████████---------| 5689/7340 [206:10<59:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c1b31663-de2f-4fd6-a091-28bf62a74f86/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5690/7340 [206:11<59:47, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5690/7340 [206:13<59:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5690/7340 [206:14<59:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.56s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:34,370 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:52:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.56s/it]2025-08-11 18:52:35,083 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.30s/it]27.6 steps/min]\n",
+ "2025-08-11 18:52:35,762 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:52:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:36,423 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:52:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5690/7340 [206:18<59:49, 27.6 steps/min]2025-08-11 18:52:37,251 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:52:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:37,906 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:52:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:38,563 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:52:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:52:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5690/7340 [206:20<59:50, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:39,255 - agent.ComputerAgent - INFO - Computer: click({'x': 83, 'y': 125})\n",
+ "2025-08-11 18:52:39,915 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:52:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5690/7340 [206:21<59:50, 27.6 steps/min]2025-08-11 18:52:42,077 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:52:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:43,452 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 78%|███████████████████████████████---------| 5691/7340 [206:25<59:48, 27.6 steps/min]2025-08-11 18:52:44,247 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:52:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:45,826 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:52:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:46,469 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:52:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5692/7340 [206:28<59:46, 27.6 steps/min]\u001b[92m18:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5692/7340 [206:29<59:47, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:48,843 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 339})\n",
+ "\u001b[92m18:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:49,517 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 286})\n",
+ " 78%|███████████████████████████████---------| 5694/7340 [206:32<59:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:52,383 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m18:52:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:52:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:53,037 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 188, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5694/7340 [206:34<59:43, 27.6 steps/min]2025-08-11 18:52:53,672 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:52:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:54,973 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:52:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:52:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5695/7340 [206:37<59:40, 27.6 steps/min]\u001b[92m18:52:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:52:56,315 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 493})\n",
+ "2025-08-11 18:52:57,007 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:52:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:52:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:52:57,663 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ " 78%|███████████████████████████████---------| 5695/7340 [206:39<59:41, 27.6 steps/min]\u001b[92m18:52:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:52:58,338 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 624, 'x': 237, 'y': 561})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:52:59,588 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5697/7340 [206:43<59:37, 27.6 steps/min]\u001b[92m18:53:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:53:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:02,654 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:53:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:53:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:03,318 - agent.ComputerAgent - INFO - Computer: click({'x': 515, 'y': 186})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:04,650 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+n'})\n",
+ "2025-08-11 18:53:05,333 - agent.ComputerAgent - INFO - Computer: wait({'x': 693, 'y': 201})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5698/7340 [206:49<59:35, 27.6 steps/min]\u001b[92m18:53:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:07,977 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 141})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 141})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:09,356 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:10,025 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5699/7340 [206:52<59:34, 27.5 steps/min]\u001b[92m18:53:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:11,379 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:53:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:53:12,019 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:53:12,021 - agent.ComputerAgent - INFO - Computer: click({'x': 405, 'y': 70})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 405, 'y': 70})\n",
+ "2025-08-11 18:53:12,707 - agent.ComputerAgent - INFO - Computer: click({'x': 432, 'y': 213})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 432, 'y': 213})\n",
+ "2025-08-11 18:53:13,470 - agent.ComputerAgent - INFO - Computer: click({'x': 258, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 258, 'y': 124})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5700/7340 [206:55<59:32, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:14,784 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:53:15,454 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 17 messages\n",
+ "\u001b[92m18:53:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:16,118 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ " 78%|███████████████████████████████---------| 5703/7340 [206:57<59:24, 27.6 steps/min]\u001b[92m18:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:16,765 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 524})\n",
+ "2025-08-11 18:53:17,444 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:53:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5704/7340 [206:59<59:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:53:18,655 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:53:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5706/7340 [207:01<59:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:20,346 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:53:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5706/7340 [207:02<59:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:22,730 - agent.ComputerAgent - INFO - Computer: type({'text': 'receipt'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'receipt'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 78%|███████████████████████████████---------| 5707/7340 [207:04<59:15, 27.6 steps/min]2025-08-11 18:53:23,393 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:53:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:53:24,053 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:53:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:53:24,743 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:53:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:53:25,433 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:53:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5709/7340 [207:07<59:10, 27.6 steps/min]\u001b[92m18:53:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:26,723 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:53:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:28,092 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 351})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 351})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5709/7340 [207:09<59:11, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:28,754 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m18:53:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:53:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:30,074 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:53:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:53:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:30,735 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m18:53:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5710/7340 [207:12<59:09, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:31,429 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 87, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 87, 'y': 739})\n",
+ "\u001b[92m18:53:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:32,099 - agent.ComputerAgent - INFO - Computer: click({'x': 710, 'y': 623})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 710, 'y': 623})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5715/7340 [207:13<58:55, 27.6 steps/min]2025-08-11 18:53:32,751 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:53:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5717/7340 [207:14<58:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af5d4b08-d761-4bf8-a2c7-7ce16ed98ab9/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 78%|███████████████████████████████---------| 5718/7340 [207:15<58:47, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 78%|███████████████████████████████---------| 5719/7340 [207:16<58:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:35,554 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ "\u001b[92m18:53:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:36,224 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m18:53:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5719/7340 [207:17<58:45, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5719/7340 [207:18<58:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:37,913 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:53:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:38,603 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:53:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:53:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5720/7340 [207:21<58:43, 27.6 steps/min]\u001b[92m18:53:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:53:40,634 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m18:53:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:42,920 - agent.ComputerAgent - INFO - Computer: type({'text': 'maps.google.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'maps.google.com'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5721/7340 [207:24<58:41, 27.6 steps/min]2025-08-11 18:53:43,566 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:53:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5723/7340 [207:27<58:36, 27.6 steps/min]2025-08-11 18:53:46,093 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m18:53:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "\u001b[92m18:53:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5723/7340 [207:30<58:37, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:49,107 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m18:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:49,757 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 51})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 243, 'y': 51})\n",
+ "\u001b[92m18:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5723/7340 [207:31<58:38, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:50,391 - agent.ComputerAgent - INFO - Computer: double_click({'x': 284, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 284, 'y': 101})\n",
+ "\u001b[92m18:53:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/258c9010-cdb1-400f-b018-bddcd76c5664/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:53:51,749 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 390})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 644, 'y': 390})\n",
+ "2025-08-11 18:53:52,415 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 130})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 130})\n",
+ "\u001b[92m18:53:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 78%|███████████████████████████████---------| 5726/7340 [207:34<58:30, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:53,087 - agent.ComputerAgent - INFO - Computer: click({'x': 514, 'y': 283})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 514, 'y': 283})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:53:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:55,098 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'PgDn'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'PgDn'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:53:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5729/7340 [207:37<58:23, 27.6 steps/min]2025-08-11 18:53:56,403 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 87, 'y': 739})\n",
+ "\u001b[92m18:53:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:53:57,088 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:53:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:53:58,122 - agent.ComputerAgent - INFO - Computer: click({'x': 617, 'y': 214})\n",
+ "\u001b[92m18:53:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5730/7340 [207:39<58:20, 27.6 steps/min]2025-08-11 18:53:58,815 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 49, 'y': 51})\n",
+ "2025-08-11 18:53:59,475 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:53:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5732/7340 [207:42<58:15, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:54:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:54:01,364 - agent.ComputerAgent - INFO - Computer: click({'x': 226, 'y': 156})\n",
+ " 78%|███████████████████████████████---------| 5733/7340 [207:43<58:13, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:02,025 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m18:54:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:02,719 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "\u001b[92m18:54:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5734/7340 [207:44<58:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:04,405 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:54:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+        " 78%|███████████████████████████████---------| 5734/7340 [207:46<58:11, 27.6 steps/min]\n",
+ "2025-08-11 18:54:05,025 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:54:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:05,727 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m18:54:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5735/7340 [207:47<58:09, 27.6 steps/min]2025-08-11 18:54:06,384 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:54:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:54:07,085 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:54:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:07,726 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m18:54:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:08,405 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:54:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5736/7340 [207:50<58:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:09,721 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:11,078 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'End'})\n",
+ " 78%|███████████████████████████████---------| 5736/7340 [207:52<58:07, 27.6 steps/min]2025-08-11 18:54:11,765 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:54:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:12,445 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:54:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5738/7340 [207:54<58:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 78%|███████████████████████████████---------| 5738/7340 [207:55<58:03, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:14,886 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:54:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:15,595 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m18:54:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:16,265 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "\u001b[92m18:54:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:54:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:54:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5738/7340 [207:58<58:03, 27.6 steps/min]2025-08-11 18:54:17,609 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:54:17,611 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 239})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:18,660 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:54:18,661 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 96, 'y': 390})\n",
+ "\u001b[92m18:54:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5738/7340 [208:00<58:04, 27.6 steps/min]2025-08-11 18:54:19,360 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 203})\n",
+ "2025-08-11 18:54:19,996 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:54:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 78%|███████████████████████████████---------| 5741/7340 [208:01<57:56, 27.6 steps/min]2025-08-11 18:54:20,685 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:54:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:22,022 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://github.com/xlang-ai/instructor-embedding'})\n",
+ " 78%|███████████████████████████████---------| 5742/7340 [208:03<57:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 78%|███████████████████████████████---------| 5744/7340 [208:04<57:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:23,675 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m18:54:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:54:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:24,325 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 125})\n",
+ "2025-08-11 18:54:24,955 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:54:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5744/7340 [208:07<57:49, 27.6 steps/min]\u001b[92m18:54:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:26,277 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:54:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:54:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:54:26,962 - agent.ComputerAgent - INFO - Computer: double_click({'x': 245, 'y': 189})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5746/7340 [208:09<57:44, 27.6 steps/min]2025-08-11 18:54:28,296 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "\u001b[92m18:54:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:28,984 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:54:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:54:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:54:29,666 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:54:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 78%|███████████████████████████████---------| 5747/7340 [208:12<57:42, 27.6 steps/min]\n",
+ "2025-08-11 18:54:30,966 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 183})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:54:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:54:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 78%|███████████████████████████████---------| 5748/7340 [208:14<57:40, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:33,622 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 193, 'y': 52})\n",
+ "\u001b[92m18:54:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:54:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:34,285 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m18:54:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:34,967 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:54:34,968 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 217})\n",
+ "2025-08-11 18:54:35,633 - agent.ComputerAgent - INFO - Computer: click({'x': 640, 'y': 551})\n",
+ "\u001b[92m18:54:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 78%|███████████████████████████████---------| 5749/7340 [208:17<57:38, 27.6 steps/min]\n",
+ "2025-08-11 18:54:36,282 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 550})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:36,956 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m18:54:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 78%|███████████████████████████████---------| 5752/7340 [208:19<57:30, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:54:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:54:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:38,998 - agent.ComputerAgent - INFO - Computer: double_click({'x': 325, 'y': 361})\n",
+ "\u001b[92m18:54:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5753/7340 [208:20<57:28, 27.6 steps/min]2025-08-11 18:54:39,677 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 176})\n",
+ "2025-08-11 18:54:40,346 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:54:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:41,669 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'Ctrl+Enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5755/7340 [208:24<57:23, 27.6 steps/min]2025-08-11 18:54:42,967 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m18:54:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:44,239 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 18:54:44,906 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:54:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5756/7340 [208:27<57:22, 27.6 steps/min]2025-08-11 18:54:46,665 - agent.ComputerAgent - INFO - Computer: double_click({'x': 345, 'y': 85})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:47,319 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:54:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:54:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 78%|███████████████████████████████---------| 5757/7340 [208:29<57:19, 27.6 steps/min]2025-08-11 18:54:48,013 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 54})\n",
+ "2025-08-11 18:54:48,677 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:54:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:49,306 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:54:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 78%|███████████████████████████████---------| 5758/7340 [208:31<57:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:49,986 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m18:54:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:54:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/584f1ba5-3dc8-4b11-9242-7100c4e1133e/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5760/7340 [208:32<57:12, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:54:51,336 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m18:54:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:54:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5760/7340 [208:33<57:12, 27.6 steps/min]2025-08-11 18:54:52,670 - agent.ComputerAgent - INFO - Computer: click({'x': 406, 'y': 282})\n",
+ "2025-08-11 18:54:53,345 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:54:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 78%|███████████████████████████████---------| 5760/7340 [208:35<57:12, 27.6 steps/min]2025-08-11 18:54:53,995 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:54:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:54:54,657 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:54:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:55,325 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m18:54:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5762/7340 [208:37<57:07, 27.6 steps/min]\n",
+ "2025-08-11 18:54:55,983 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:54:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:54:56,650 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m18:54:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5762/7340 [208:38<57:08, 27.6 steps/min]2025-08-11 18:54:57,317 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:54:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:54:57,985 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:54:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5762/7340 [208:40<57:08, 27.6 steps/min]\u001b[92m18:54:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:54:59,335 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m18:54:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<57:09, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 79%|███████████████████████████████---------| 5763/7340 [208:42<57:06, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:02,246 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m18:55:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5764/7340 [208:44<57:04, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]2025-08-11 18:55:03,777 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "\u001b[92m18:55:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5764/7340 [208:45<57:04, 27.6 steps/min]2025-08-11 18:55:04,454 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:55:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 18:55:05,969 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 79%|███████████████████████████████---------| 5765/7340 [208:47<57:02, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:55:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:55:08,007 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:55:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:55:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5766/7340 [208:51<57:00, 27.6 steps/min]\n",
+ "2025-08-11 18:55:09,956 - agent.ComputerAgent - INFO - Computer: click({'x': 259, 'y': 154})\n",
+ "\u001b[92m18:55:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:55:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:55:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:55:10,636 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:55:12,594 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:55:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:55:14,550 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 271})\n",
+ "2025-08-11 18:55:15,252 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 653, 'x': 991, 'y': 400})\n",
+ "2025-08-11 18:55:15,944 - agent.ComputerAgent - INFO - Computer: click({'x': 514, 'y': 186})\n",
+ "\u001b[92m18:55:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5766/7340 [208:57<57:02, 27.6 steps/min]\u001b[92m18:55:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:55:16,596 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m18:55:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:55:17,240 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 185, 'y': 53})\n",
+ "2025-08-11 18:55:17,912 - agent.ComputerAgent - INFO - Computer: double_click({'x': 273, 'y': 122})\n",
+ "\u001b[92m18:55:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:19,180 - agent.ComputerAgent - INFO - Computer: type({'text': 'T1'})\n",
+ "\u001b[92m18:55:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:55:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:55:19,868 - agent.ComputerAgent - INFO - Computer: double_click({'x': 331, 'y': 92})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 79%|███████████████████████████████---------| 5771/7340 [209:01<56:49, 27.6 steps/min]2025-08-11 18:55:20,501 - agent.ComputerAgent - INFO - Computer: double_click({'x': 479, 'y': 158})\n",
+ "2025-08-11 18:55:21,173 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 183})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:22,544 - agent.ComputerAgent - INFO - Agent: I’m ready to fill your MUST_VISIT.xlsx form with each restaurant’s address, website, and phone from Google Maps. However, I don’t have your list of restaurant names. Please provide the restaurant names (one per line), or tell me where the list is stored on this computer so I can open it and proceed.\n",
+ "\n",
+ "Once I have the names, I’ll:\n",
+ "- Search each on Google Maps (Hong Kong).\n",
+ "- Record Address, Contact Number, and Website.\n",
+ "- Enter everything into your open spreadsheet and save it.\n",
+ "\n",
+ "Share the list and I’ll complete the task right away.\n",
+ "2025-08-11 18:55:23,208 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 959\n",
+ " - prompt_tokens: 8139\n",
+ " - total_tokens: 9098\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 832\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0198\n",
+ " 79%|███████████████████████████████---------| 5778/7340 [209:06<56:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:24,918 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:55:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:25,990 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "\u001b[92m18:55:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5778/7340 [209:08<56:32, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:55:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:55:27,823 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 76})\n",
+ " 79%|███████████████████████████████---------| 5778/7340 [209:09<56:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:29,017 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:55:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5779/7340 [209:10<56:30, 27.6 steps/min]2025-08-11 18:55:30,054 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:55:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:30,737 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:55:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:31,421 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m18:55:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:32,700 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://github.com/xlang-ai/instructor-embedding'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5779/7340 [209:14<56:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:55:33,346 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:55:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:34,374 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:55:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:16<56:26, 27.6 steps/min]2025-08-11 18:55:35,040 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:55:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:35,709 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m18:55:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:17<56:26, 27.6 steps/min]2025-08-11 18:55:36,339 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:55:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:37,414 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:55:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:19<56:26, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:38,087 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m18:55:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:21<56:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:22<56:27, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:55:41,739 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m18:55:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:55:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:23<56:28, 27.6 steps/min]2025-08-11 18:55:42,385 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 217})\n",
+ "2025-08-11 18:55:43,053 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:55:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5781/7340 [209:24<56:28, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5782/7340 [209:25<56:25, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0835b0ab-8369-4209-a85b-955dbb271b54/close \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5783/7340 [209:26<56:23, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5783/7340 [209:27<56:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5783/7340 [209:29<56:24, 27.6 steps/min]\u001b[92m18:55:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:55:49,244 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m18:55:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5783/7340 [209:30<56:24, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5783/7340 [209:31<56:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:51,607 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:55:53,171 - agent.ComputerAgent - INFO - Agent: I’m ready to fill the spreadsheet and already have Google Maps open, but I’m missing your list of restaurant names. That list is required to search each place and capture its address, phone, and website.\n",
+ "\n",
+ "Please provide the restaurant names (one per line) here, or tell me where the list file is stored on this computer so I can open it and proceed.\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ "2025-08-11 18:55:53,843 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1107\n",
+ " - prompt_tokens: 8278\n",
+ " - total_tokens: 9385\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1024\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0214\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/35ee5a0d-d8f9-4419-b253-d394f35fe993/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:55:56,015 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ " 79%|███████████████████████████████---------| 5784/7340 [209:37<56:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:55:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:55:58,623 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:55:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:55:59,978 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 51})\n",
+ "\u001b[92m18:55:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:55:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:56:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:02,004 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ " 79%|███████████████████████████████---------| 5784/7340 [209:43<56:25, 27.6 steps/min]\u001b[92m18:56:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:03,351 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 87, 'y': 739})\n",
+ "2025-08-11 18:56:04,020 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 56})\n",
+ "\u001b[92m18:56:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:56:05,373 - agent.ComputerAgent - INFO - Computer: click({'x': 638, 'y': 372})\n",
+ "2025-08-11 18:56:06,044 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 172})\n",
+ "2025-08-11 18:56:06,662 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 174})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.64s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.58s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:56:08,854 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m18:56:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.32s/it]\n",
+ " 79%|███████████████████████████████---------| 5790/7340 [209:52<56:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:56:12,285 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:56:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:56:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5790/7340 [209:54<56:11, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:13,585 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m18:56:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:14,275 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 254})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:15,025 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m18:56:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5790/7340 [209:56<56:12, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:15,728 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 53})\n",
+ "2025-08-11 18:56:16,462 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 526})\n",
+ "2025-08-11 18:56:17,493 - agent.ComputerAgent - INFO - Computer: click({'x': 859, 'y': 80})\n",
+ "\u001b[92m18:56:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:18,201 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m18:56:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5791/7340 [209:59<56:10, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:56:18,909 - agent.ComputerAgent - INFO - Computer: double_click({'x': 324, 'y': 347})\n",
+ "2025-08-11 18:56:19,585 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m18:56:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5794/7340 [210:01<56:02, 27.6 steps/min]2025-08-11 18:56:21,155 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m18:56:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5795/7340 [210:02<56:00, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 79%|███████████████████████████████---------| 5795/7340 [210:03<56:00, 27.6 steps/min]\u001b[92m18:56:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:23,004 - agent.ComputerAgent - INFO - Computer: click({'x': 746, 'y': 41})\n",
+ " 79%|███████████████████████████████---------| 5796/7340 [210:05<55:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:25,835 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m18:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5796/7340 [210:07<55:58, 27.6 steps/min]2025-08-11 18:56:26,504 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m18:56:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:56:27,161 - agent.ComputerAgent - INFO - Computer: click({'x': 286, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5796/7340 [210:09<55:59, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:28,505 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:56:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:29,851 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 128, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 128, 'y': 741})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/reset \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5797/7340 [210:11<55:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:56:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:31,174 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 141})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 141})\n",
+ "2025-08-11 18:56:31,815 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:56:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5798/7340 [210:13<55:54, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:32,470 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 386})\n",
+ "2025-08-11 18:56:33,116 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5799/7340 [210:14<55:52, 27.6 steps/min]2025-08-11 18:56:33,774 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:56:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5800/7340 [210:16<55:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5800/7340 [210:17<55:50, 27.6 steps/min]\u001b[92m18:56:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:37,144 - agent.ComputerAgent - INFO - Computer: click({'x': 641, 'y': 521})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 641, 'y': 521})\n",
+ "2025-08-11 18:56:37,757 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:56:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5800/7340 [210:19<55:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:56:38,444 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:56:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:56:40,461 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5801/7340 [210:22<55:48, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:41,776 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:56:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:56:42,405 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:56:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:43,735 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:56:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:44,399 - agent.ComputerAgent - INFO - Computer: click({'x': 130, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 130, 'y': 74})\n",
+ "2025-08-11 18:56:45,025 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:56:46,406 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:56:46,408 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+home'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+home'})\n",
+ " 79%|███████████████████████████████---------| 5801/7340 [210:28<55:50, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:47,095 - agent.ComputerAgent - INFO - Computer: double_click({'x': 263, 'y': 172})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 263, 'y': 172})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:56:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:49,709 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ " 79%|███████████████████████████████---------| 5802/7340 [210:31<55:48, 27.6 steps/min]\u001b[92m18:56:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:50,408 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 243})\n",
+ "\u001b[92m18:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:51,080 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 115, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 115, 'y': 34})\n",
+ "2025-08-11 18:56:51,762 - agent.ComputerAgent - INFO - Computer: click({'x': 390, 'y': 76})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 390, 'y': 76})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:56:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:56:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5803/7340 [210:34<55:46, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:56:53,115 - agent.ComputerAgent - INFO - Computer: click({'x': 226, 'y': 156})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 226, 'y': 156})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:56:53,739 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:56:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:56:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5806/7340 [210:35<55:38, 27.6 steps/min]2025-08-11 18:56:54,761 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 294})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 554, 'y': 294})\n",
+ " 79%|███████████████████████████████---------| 5808/7340 [210:37<55:33, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:56:57,583 - agent.ComputerAgent - INFO - Computer: type({'text': 'T1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'T1'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:56:58,888 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+pagedown'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+pagedown'})\n",
+ " 79%|███████████████████████████████---------| 5808/7340 [210:40<55:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:00,075 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:57:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:00,715 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ " 79%|███████████████████████████████---------| 5809/7340 [210:42<55:31, 27.6 steps/min]\u001b[92m18:57:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:02,057 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:57:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:02,738 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m18:57:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:57:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5809/7340 [210:44<55:32, 27.6 steps/min]2025-08-11 18:57:03,397 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:57:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:04,035 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 187})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 187})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5809/7340 [210:46<55:33, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:05,990 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:57:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:57:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5810/7340 [210:48<55:30, 27.6 steps/min]2025-08-11 18:57:07,316 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:07,967 - agent.ComputerAgent - INFO - Computer: click({'x': 407, 'y': 510})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 407, 'y': 510})\n",
+ "\u001b[92m18:57:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:08,623 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:57:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5810/7340 [210:50<55:31, 27.6 steps/min]2025-08-11 18:57:09,276 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 188, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 188, 'y': 52})\n",
+ "\u001b[92m18:57:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:09,955 - agent.ComputerAgent - INFO - Computer: click({'x': 745, 'y': 372})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 745, 'y': 372})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5811/7340 [210:52<55:29, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/81e43616-5be3-4846-b466-62247641452b/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:11,249 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:57:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:57:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:11,912 - agent.ComputerAgent - INFO - Computer: click({'x': 954, 'y': 303})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 954, 'y': 303})\n",
+ " 79%|███████████████████████████████---------| 5813/7340 [210:53<55:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:13,127 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:57:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5814/7340 [210:54<55:21, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:15,637 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+v'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+v'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/reset \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5814/7340 [210:57<55:22, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:16,300 - agent.ComputerAgent - INFO - Computer: click({'x': 52, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 52, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:16,940 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:57:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5814/7340 [210:58<55:22, 27.6 steps/min]2025-08-11 18:57:17,610 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:57:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:18,946 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:57:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5815/7340 [211:00<55:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:19,627 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:57:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5815/7340 [211:02<55:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:21,585 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:57:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:22,252 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 365})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 365})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5815/7340 [211:05<55:21, 27.5 steps/min]\u001b[92m18:57:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:24,281 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 413})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 542, 'y': 413})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:24,970 - agent.ComputerAgent - INFO - Computer: click({'x': 67, 'y': 193})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 67, 'y': 193})\n",
+ "\u001b[92m18:57:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:25,646 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:57:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5816/7340 [211:08<55:19, 27.5 steps/min]\u001b[92m18:57:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:26,959 - agent.ComputerAgent - INFO - Computer: double_click({'x': 420, 'y': 346})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 420, 'y': 346})\n",
+ "2025-08-11 18:57:27,646 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 122})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 399, 'y': 122})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:28,967 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:57:28,969 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ "\u001b[92m18:57:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5818/7340 [211:10<55:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:29,607 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:57:29,608 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 725})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 725})\n",
+ "2025-08-11 18:57:30,241 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:57:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5820/7340 [211:11<55:09, 27.6 steps/min]2025-08-11 18:57:30,907 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:57:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5821/7340 [211:13<55:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 79%|███████████████████████████████---------| 5821/7340 [211:14<55:07, 27.6 steps/min]\u001b[92m18:57:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:34,216 - agent.ComputerAgent - INFO - Computer: click({'x': 194, 'y': 172})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 194, 'y': 172})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5821/7340 [211:15<55:07, 27.6 steps/min]2025-08-11 18:57:34,886 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:57:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5822/7340 [211:16<55:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:36,699 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:57:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:38,686 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5822/7340 [211:20<55:06, 27.5 steps/min]\u001b[92m18:57:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:39,387 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m18:57:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:40,021 - agent.ComputerAgent - INFO - Computer: click({'x': 940, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 940, 'y': 243})\n",
+ "\u001b[92m18:57:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:40,648 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:57:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 79%|███████████████████████████████---------| 5823/7340 [211:22<55:03, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:41,307 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:57:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:42,017 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 420, 'y': 410})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 420, 'y': 410})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5824/7340 [211:24<55:01, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:43,338 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:57:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:45,349 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+s'})\n",
+ " 79%|███████████████████████████████---------| 5825/7340 [211:27<54:59, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:45,989 - agent.ComputerAgent - INFO - Computer: click({'x': 747, 'y': 40})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 747, 'y': 40})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:57:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:47,314 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ " 79%|███████████████████████████████---------| 5825/7340 [211:29<55:00, 27.5 steps/min]\u001b[92m18:57:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:47,943 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:57:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:48,630 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 626, 'x': 745, 'y': 373})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 626, 'x': 745, 'y': 373})\n",
+ "\u001b[92m18:57:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5827/7340 [211:30<54:55, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:57:49,293 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 141})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 141})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5827/7340 [211:31<54:55, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:57:50,630 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:57:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:51,984 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+n'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:53,342 - agent.ComputerAgent - INFO - Computer: keykeypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keykeypress({'keys': 'ctrl+l'})\n",
+ "2025-08-11 18:57:53,343 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: keykeypress\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\u001b[92m18:57:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:57:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:54,717 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 79%|███████████████████████████████---------| 5828/7340 [211:36<54:53, 27.5 steps/min]2025-08-11 18:57:55,388 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:57:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:57:56,038 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m18:57:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:57:56,709 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:57:58,158 - agent.ComputerAgent - INFO - Computer: type({'text': 'Pass;Fail;Held'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Pass;Fail;Held'})\n",
+ "2025-08-11 18:57:58,836 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:57:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:57:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 79%|███████████████████████████████---------| 5829/7340 [211:41<54:52, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:58:00,943 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+f'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+f'})\n",
+ "2025-08-11 18:58:02,024 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:58:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:58:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 79%|███████████████████████████████---------| 5831/7340 [211:43<54:47, 27.5 steps/min]2025-08-11 18:58:02,717 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:58:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:58:03,386 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m18:58:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:58:04,086 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 37})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 37})\n",
+ " 80%|███████████████████████████████---------| 5852/7340 [211:47<53:51, 27.6 steps/min]2025-08-11 18:58:06,253 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:58:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 80%|███████████████████████████████---------| 5852/7340 [211:50<53:51, 27.6 steps/min]\u001b[92m18:58:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:58:08,995 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:58:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:58:09,659 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:58:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]2025-08-11 18:58:11,606 - agent.ComputerAgent - INFO - Computer: type({'text': 'T2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'T2'})\n",
+ " 80%|███████████████████████████████---------| 5853/7340 [211:53<53:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]2025-08-11 18:58:12,961 - agent.ComputerAgent - INFO - Agent: I created a desktop shortcut named “Play Puzzle Game 2048” that opens this site in Chrome. You can double-click it on your Desktop anytime to return here.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: I created a desktop shortcut named “Play Puzzle Game 2048” that opens this site in Chrome. You can double-click it on your Desktop anytime to return here.\n",
+ "\n",
+ "Task completed\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 18:58:13,730 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 685\n",
+ " - prompt_tokens: 8825\n",
+ " - total_tokens: 9510\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0179\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 685\n",
+ " - prompt_tokens: 8825\n",
+ " - total_tokens: 9510\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0179\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]27.6 steps/min]\n",
+ "2025-08-11 18:58:14,358 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:58:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:58:15,040 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m18:58:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:58:16,007 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m18:58:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|███████████████████████████████---------| 5856/7340 [211:57<53:42, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:58:17,199 - agent.ComputerAgent - INFO - Computer: double_click({'x': 987, 'y': 629})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 987, 'y': 629})\n",
+ " 80%|███████████████████████████████---------| 5856/7340 [211:58<53:43, 27.6 steps/min]\u001b[92m18:58:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:58:17,846 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 284})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 284})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:58:18,538 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m18:58:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|███████████████████████████████---------| 5857/7340 [212:00<53:40, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|███████████████████████████████---------| 5858/7340 [212:02<53:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 80%|███████████████████████████████---------| 5862/7340 [212:04<53:28, 27.6 steps/min]\u001b[92m18:58:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:58:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b371bc4c-8484-4896-8440-445fc0c258a1/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:58:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:58:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5872/7340 [212:06<53:01, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:58:26,346 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 118, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 118, 'y': 53})\n",
+ "2025-08-11 18:58:26,990 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 142})\n",
+ "\u001b[92m18:58:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:58:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:58:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:58:27,648 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:58:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5872/7340 [212:09<53:02, 27.7 steps/min]2025-08-11 18:58:28,303 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 503})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 503})\n",
+ "2025-08-11 18:58:28,966 - agent.ComputerAgent - INFO - Computer: click({'x': 552, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 552, 'y': 148})\n",
+ "2025-08-11 18:58:29,632 - agent.ComputerAgent - INFO - Computer: click({'x': 954, 'y': 332})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 954, 'y': 332})\n",
+ "2025-08-11 18:58:30,258 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:58:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:58:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5874/7340 [212:12<52:57, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:58:31,579 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 496})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 496})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m18:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5877/7340 [212:13<52:49, 27.7 steps/min]2025-08-11 18:58:32,894 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m18:58:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.80s/it]2025-08-11 18:58:34,298 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m18:58:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5878/7340 [212:16<52:47, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:58:35,218 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m18:58:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/164ae77e-28f5-4055-a531-e741b8ebd2d8/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.64s/it]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5878/7340 [212:18<52:48, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 18:58:38,035 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:58:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5878/7340 [212:19<52:48, 27.7 steps/min]2025-08-11 18:58:38,696 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m18:58:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:58:40,078 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:58:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5878/7340 [212:21<52:49, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:58:40,748 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m18:58:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 18:58:41,398 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:58:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5879/7340 [212:23<52:46, 27.7 steps/min]2025-08-11 18:58:42,037 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m18:58:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.96s/it]27.7 steps/min]2025-08-11 18:58:43,238 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:58:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:58:44,131 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.75s/it]INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m18:58:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5879/7340 [212:25<52:47, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.75s/it]\u001b[92m18:58:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5879/7340 [212:27<52:47, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.52s/it]\n",
+ " 80%|████████████████████████████████--------| 5879/7340 [212:28<52:48, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:58:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5879/7340 [212:30<52:48, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:58:50,143 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m18:58:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5880/7340 [212:32<52:46, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:58:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5881/7340 [212:34<52:44, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:58:53,690 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m18:58:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5881/7340 [212:35<52:44, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:58:54,840 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m18:58:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5881/7340 [212:39<52:45, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5882/7340 [212:40<52:43, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:00,060 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m18:59:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5882/7340 [212:41<52:43, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:01,467 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+n'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5882/7340 [212:43<52:43, 27.7 steps/min]\u001b[92m18:59:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:03,436 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+p'})\n",
+ "2025-08-11 18:59:04,100 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:59:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5882/7340 [212:46<52:44, 27.6 steps/min]\u001b[92m18:59:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:59:05,409 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:59:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5883/7340 [212:47<52:42, 27.6 steps/min]\u001b[92m18:59:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:59:07,098 - agent.ComputerAgent - INFO - Computer: click({'x': 555, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 555, 'y': 446})\n",
+ " 80%|████████████████████████████████--------| 5883/7340 [212:48<52:42, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:08,648 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m18:59:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5884/7340 [212:50<52:40, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5884/7340 [212:54<52:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5885/7340 [212:55<52:38, 27.6 steps/min]2025-08-11 18:59:14,359 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m18:59:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m18:59:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5885/7340 [212:57<52:39, 27.6 steps/min]\u001b[92m18:59:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:16,345 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m18:59:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:59:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5885/7340 [212:58<52:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5885/7340 [213:01<52:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5886/7340 [213:02<52:37, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:59:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:59:22,400 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m18:59:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5886/7340 [213:04<52:38, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5886/7340 [213:06<52:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5887/7340 [213:08<52:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:27,661 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m18:59:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5887/7340 [213:09<52:36, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:59:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5887/7340 [213:11<52:37, 27.6 steps/min]\u001b[92m18:59:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:59:30,794 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 637})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 637})\n",
+ " 80%|████████████████████████████████--------| 5887/7340 [213:12<52:37, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5889/7340 [213:13<52:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:33,119 - agent.ComputerAgent - INFO - Computer: type({'text': 'Pass\\nFail\\nHeld'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Pass\\nFail\\nHeld'})\n",
+ " 80%|████████████████████████████████--------| 5889/7340 [213:14<52:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:15<52:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:16<52:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:17<52:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/88fabe35-b20f-4415-a671-a5660eca5719/reset \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:18<52:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:38,395 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m18:59:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:20<52:31, 27.6 steps/min]2025-08-11 18:59:39,051 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m18:59:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:39,711 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:59:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:21<52:31, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5890/7340 [213:24<52:32, 27.6 steps/min]\u001b[92m18:59:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:59:43,417 - agent.ComputerAgent - INFO - Computer: click({'x': 405, 'y': 104})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 405, 'y': 104})\n",
+ " 80%|████████████████████████████████--------| 5891/7340 [213:26<52:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b099bebb-084c-441f-a8aa-409b847efc75/reset \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5891/7340 [213:28<52:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:59:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 18:59:48,481 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 18:59:48,481 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ " 80%|████████████████████████████████--------| 5891/7340 [213:30<52:30, 27.6 steps/min]\u001b[92m18:59:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:59:49,404 - agent.ComputerAgent - INFO - Computer: click({'x': 221, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 221, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5892/7340 [213:31<52:28, 27.6 steps/min]2025-08-11 18:59:50,027 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m18:59:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 18:59:50,721 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m18:59:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5893/7340 [213:32<52:26, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5893/7340 [213:33<52:26, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m18:59:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5893/7340 [213:34<52:26, 27.6 steps/min]\u001b[92m18:59:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 18:59:54,103 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': -686, 'x': 523, 'y': 420})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': -686, 'x': 523, 'y': 420})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e4f57ce5-1be8-466a-acda-67c54fe89cc0/close \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5894/7340 [213:35<52:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 18:59:55,764 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m18:59:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5894/7340 [213:37<52:24, 27.6 steps/min]2025-08-11 18:59:56,822 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m18:59:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 80%|████████████████████████████████--------| 5894/7340 [213:38<52:24, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 18:59:57,513 - agent.ComputerAgent - INFO - LLM processing started with 19 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 19 messages\n",
+ "\u001b[92m18:59:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5894/7340 [213:39<52:25, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m18:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 80%|████████████████████████████████--------| 5894/7340 [213:40<52:25, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m18:59:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 18:59:59,740 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 926, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:06, 2.17s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:01,948 - agent.ComputerAgent - INFO - LLM processing started with 21 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 21 messages\n",
+ "\u001b[92m19:00:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5896/7340 [213:43<52:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5897/7340 [213:46<52:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.79s/it]\u001b[92m19:00:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.52s/it]\n",
+ "\u001b[92m19:00:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:00:06,523 - agent.ComputerAgent - INFO - LLM processing started with 23 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 23 messages\n",
+ " 80%|████████████████████████████████--------| 5897/7340 [213:48<52:19, 27.6 steps/min]\u001b[92m19:00:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:07,193 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:00:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5898/7340 [213:51<52:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:10,602 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m19:00:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5898/7340 [213:52<52:17, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 80%|████████████████████████████████--------| 5898/7340 [213:53<52:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5899/7340 [213:55<52:15, 27.6 steps/min]\u001b[92m19:00:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:00:14,023 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 432})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:14,685 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m19:00:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5900/7340 [213:56<52:12, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5901/7340 [213:59<52:10, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:18,425 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m19:00:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5901/7340 [214:00<52:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:00:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5902/7340 [214:01<52:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:20,741 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ " 80%|████████████████████████████████--------| 5902/7340 [214:02<52:09, 27.6 steps/min]\u001b[92m19:00:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:21,935 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m19:00:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5902/7340 [214:03<52:09, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5902/7340 [214:04<52:09, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5903/7340 [214:06<52:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:25,675 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m19:00:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5903/7340 [214:08<52:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5904/7340 [214:10<52:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:29,916 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m19:00:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5904/7340 [214:11<52:05, 27.6 steps/min]\u001b[92m19:00:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:00:30,788 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:00:30,790 - agent.ComputerAgent - INFO - Computer: click({'x': 16, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 16, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:32,147 - agent.ComputerAgent - INFO - Computer: type({'text': 'Tim Ho Wan Hong Kong'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Tim Ho Wan Hong Kong'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 80%|████████████████████████████████--------| 5905/7340 [214:13<52:03, 27.6 steps/min]\u001b[92m19:00:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:00:32,798 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:00:32,799 - agent.ComputerAgent - INFO - Computer: click({'x': 330, 'y': 171})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 330, 'y': 171})\n",
+ " 80%|████████████████████████████████--------| 5908/7340 [214:15<51:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:35,467 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m19:00:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5908/7340 [214:17<51:56, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 80%|████████████████████████████████--------| 5908/7340 [214:18<51:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 81%|████████████████████████████████--------| 5909/7340 [214:19<51:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:38,640 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:00:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5909/7340 [214:20<51:54, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:00:39,326 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m19:00:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:40,019 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:00:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5909/7340 [214:21<51:54, 27.6 steps/min]2025-08-11 19:00:40,657 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:00:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5909/7340 [214:22<51:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 81%|████████████████████████████████--------| 5910/7340 [214:24<51:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:43,866 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m19:00:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5910/7340 [214:26<51:53, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:00:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5911/7340 [214:27<51:50, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5911/7340 [214:28<51:51, 27.6 steps/min]\u001b[92m19:00:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:00:47,767 - agent.ComputerAgent - INFO - Computer: move({'x': 13, 'y': 768})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 13, 'y': 768})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:48,436 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m19:00:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5911/7340 [214:30<51:51, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:00:50,030 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 81%|████████████████████████████████--------| 5913/7340 [214:31<51:46, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:00:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5914/7340 [214:32<51:43, 27.6 steps/min]\u001b[92m19:00:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:00:51,843 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 140})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 140})\n",
+ " 81%|████████████████████████████████--------| 5914/7340 [214:33<51:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:35<51:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:36<51:42, 27.6 steps/min]2025-08-11 19:00:55,566 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:00:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:37<51:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:00:56,726 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:00:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:38<51:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:39<51:42, 27.6 steps/min]2025-08-11 19:00:58,935 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:00:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:41<51:43, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:42<51:43, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:01:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:01,792 - agent.ComputerAgent - INFO - Computer: click({'x': 52, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 52, 'y': 77})\n",
+ " 81%|████████████████████████████████--------| 5915/7340 [214:43<51:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5916/7340 [214:48<51:42, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:01:08,197 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:01:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5916/7340 [214:49<51:42, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5916/7340 [214:50<51:42, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5921/7340 [214:51<51:29, 27.6 steps/min]\u001b[92m19:01:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:11,058 - agent.ComputerAgent - INFO - Computer: double_click({'x': 420, 'y': 346})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 420, 'y': 346})\n",
+ " 81%|████████████████████████████████--------| 5921/7340 [214:52<51:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/267c6e03-a37f-4931-a4ba-8633b41aa3e5/close \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5922/7340 [214:53<51:27, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5922/7340 [214:57<51:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1e0da601-e961-4fe5-ac6b-06f530294395/reset \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5922/7340 [214:58<51:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:01:18,409 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:01:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5922/7340 [215:00<51:28, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:19,087 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:01:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:01:19,801 - agent.ComputerAgent - INFO - Computer: click({'x': 954, 'y': 331})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 954, 'y': 331})\n",
+ " 81%|████████████████████████████████--------| 5923/7340 [215:06<51:27, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<51:27, 27.5 steps/min]2025-08-11 19:01:26,779 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:01:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5923/7340 [215:09<51:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:08, 2.70s/it]2025-08-11 19:01:29,348 - agent.ComputerAgent - INFO - Computer: type({'text': ' tim ho wan central hong kong'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' tim ho wan central hong kong'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:06<00:02, 2.02s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.74s/it]\n",
+ "2025-08-11 19:01:33,798 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:01:33,799 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 81%|████████████████████████████████--------| 5924/7340 [215:16<51:27, 27.5 steps/min]\u001b[92m19:01:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:01:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:01:35,375 - agent.ComputerAgent - INFO - Computer: click({'x': 786, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 786, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5925/7340 [215:17<51:24, 27.5 steps/min]2025-08-11 19:01:36,517 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:01:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5926/7340 [215:18<51:22, 27.5 steps/min]\u001b[92m19:01:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:37,712 - agent.ComputerAgent - INFO - Computer: click({'x': 816, 'y': 241})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 816, 'y': 241})\n",
+ " 81%|████████████████████████████████--------| 5926/7340 [215:19<51:22, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5927/7340 [215:21<51:20, 27.5 steps/min]\u001b[92m19:01:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:01:41,060 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 212, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 212, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5927/7340 [215:22<51:20, 27.5 steps/min]2025-08-11 19:01:41,699 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:01:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:01:42,377 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:01:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5928/7340 [215:24<51:18, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5928/7340 [215:25<51:18, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:44,038 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:01:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5928/7340 [215:26<51:18, 27.5 steps/min]\u001b[92m19:01:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:45,194 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 53})\n",
+ " 81%|████████████████████████████████--------| 5929/7340 [215:29<51:16, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:01:48,379 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:01:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5929/7340 [215:31<51:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5929/7340 [215:32<51:17, 27.5 steps/min]2025-08-11 19:01:51,087 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:01:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5929/7340 [215:34<51:18, 27.5 steps/min]\u001b[92m19:01:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:01:53,295 - agent.ComputerAgent - INFO - Computer: double_click({'x': 203, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 203, 'y': 131})\n",
+ " 81%|████████████████████████████████--------| 5929/7340 [215:35<51:18, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:01:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:01:56,312 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ " 81%|████████████████████████████████--------| 5930/7340 [215:38<51:16, 27.5 steps/min]2025-08-11 19:01:57,487 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ " 81%|████████████████████████████████--------| 5930/7340 [215:39<51:16, 27.5 steps/min]\u001b[92m19:01:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5930/7340 [215:40<51:17, 27.5 steps/min]\u001b[92m19:01:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:01:59,807 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:01:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5930/7340 [215:42<51:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5930/7340 [215:44<51:17, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5930/7340 [215:46<51:18, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:02:06,522 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 81%|████████████████████████████████--------| 5950/7340 [215:48<50:25, 27.6 steps/min]\u001b[92m19:02:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:07,860 - agent.ComputerAgent - INFO - Computer: click({'x': 247, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 247, 'y': 155})\n",
+ "\u001b[92m19:02:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:08,501 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:02:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:02:09,169 - agent.ComputerAgent - INFO - Computer: double_click({'x': 324, 'y': 347})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 324, 'y': 347})\n",
+ " 81%|████████████████████████████████--------| 5950/7340 [215:50<50:25, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5952/7340 [215:56<50:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.56s/it]\n",
+ "2025-08-11 19:02:15,154 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:02:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:02:16,830 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 81%|████████████████████████████████--------| 5952/7340 [215:59<50:22, 27.6 steps/min]\u001b[92m19:02:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:18,560 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:02:19,218 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:02:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 81%|████████████████████████████████--------| 5952/7340 [216:01<50:22, 27.6 steps/min]\u001b[92m19:02:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:20,535 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 232, 'y': 429})\n",
+ "\u001b[92m19:02:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:21,203 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 926, 'y': 243})\n",
+ "\u001b[92m19:02:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5953/7340 [216:02<50:20, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:21,872 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 62})\n",
+ "2025-08-11 19:02:22,556 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:02:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5955/7340 [216:04<50:15, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:02:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:23,209 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 249, 'y': 412})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 249, 'y': 412})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 81%|████████████████████████████████--------| 5957/7340 [216:06<50:10, 27.6 steps/min]\u001b[92m19:02:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:02:25,897 - agent.ComputerAgent - INFO - Computer: click({'x': 630, 'y': 550})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 630, 'y': 550})\n",
+ " 81%|████████████████████████████████--------| 5957/7340 [216:07<50:10, 27.6 steps/min]\u001b[92m19:02:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:26,580 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 723, 'y': 403})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 723, 'y': 403})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:02:27,278 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:02:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:02:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5958/7340 [216:09<50:08, 27.6 steps/min]2025-08-11 19:02:28,344 - agent.ComputerAgent - INFO - Computer: click({'x': 523, 'y': 89})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:02:29,749 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 81%|████████████████████████████████--------| 5959/7340 [216:11<50:06, 27.6 steps/min]\u001b[92m19:02:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:02:30,408 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:02:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:31,094 - agent.ComputerAgent - INFO - Computer: double_click({'x': 203, 'y': 151})\n",
+ "\u001b[92m19:02:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:02:32,399 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 19:02:33,059 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ " 81%|████████████████████████████████--------| 5960/7340 [216:14<50:04, 27.6 steps/min]\u001b[92m19:02:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:33,734 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 232})\n",
+ "2025-08-11 19:02:34,399 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:02:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:02:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:02:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:35,099 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:02:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:35,799 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:02:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:02:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:36,473 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 244})\n",
+ "2025-08-11 19:02:37,117 - agent.ComputerAgent - INFO - Computer: click({'x': 402, 'y': 104})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5961/7340 [216:18<50:02, 27.6 steps/min]2025-08-11 19:02:37,799 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 147})\n",
+ " 81%|████████████████████████████████--------| 5964/7340 [216:19<49:54, 27.6 steps/min]2025-08-11 19:02:39,477 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:02:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5965/7340 [216:21<49:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:02:40,178 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:02:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5965/7340 [216:22<49:52, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:02:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:42,392 - agent.ComputerAgent - INFO - Computer: click({'x': 954, 'y': 303})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:02:43,731 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 81%|████████████████████████████████--------| 5965/7340 [216:25<49:53, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:02:45,030 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:02:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5966/7340 [216:26<49:50, 27.6 steps/min]2025-08-11 19:02:45,676 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:02:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:02:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:46,370 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:02:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:47,069 - agent.ComputerAgent - INFO - Computer: click({'x': 847, 'y': 571})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5966/7340 [216:29<49:51, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:02:48,429 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:02:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:49,098 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:02:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:02:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5967/7340 [216:30<49:49, 27.6 steps/min]2025-08-11 19:02:49,748 - agent.ComputerAgent - INFO - Computer: double_click({'x': 351, 'y': 93})\n",
+ "2025-08-11 19:02:50,801 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:02:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5967/7340 [216:32<49:49, 27.6 steps/min]2025-08-11 19:02:51,469 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:02:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:52,158 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:02:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 81%|████████████████████████████████--------| 5968/7340 [216:33<49:47, 27.6 steps/min]2025-08-11 19:02:52,849 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:02:54,212 - agent.ComputerAgent - INFO - Computer: type({'text': 'T1'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5968/7340 [216:35<49:47, 27.6 steps/min]2025-08-11 19:02:54,907 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:02:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:56,269 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ " 81%|████████████████████████████████--------| 5969/7340 [216:38<49:45, 27.6 steps/min]\u001b[92m19:02:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:02:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:02:57,482 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 249, 'y': 219})\n",
+ " 81%|████████████████████████████████--------| 5970/7340 [216:40<49:43, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:02:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5970/7340 [216:41<49:43, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:01,039 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:03:01,041 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "\u001b[92m19:03:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5970/7340 [216:42<49:43, 27.5 steps/min]2025-08-11 19:03:01,748 - agent.ComputerAgent - INFO - Computer: click({'x': 262, 'y': 180})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:03,089 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ " 81%|████████████████████████████████--------| 5970/7340 [216:44<49:44, 27.5 steps/min]\u001b[92m19:03:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:03,780 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:03:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:04,482 - agent.ComputerAgent - INFO - Computer: click({'x': 86, 'y': 271})\n",
+ "2025-08-11 19:03:05,905 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:03:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5971/7340 [216:48<49:42, 27.5 steps/min]\u001b[92m19:03:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:07,979 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~\\nconvert OIP.jpg Desktop/receipt.pdf\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:09,351 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:11,291 - agent.ComputerAgent - INFO - Computer: type({'text': 'NBA Store women section'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5972/7340 [216:53<49:41, 27.5 steps/min]\u001b[92m19:03:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:13,306 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n arr[j+1] = arr[j]'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:14,610 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 281})\n",
+ "\u001b[92m19:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5975/7340 [216:56<49:33, 27.5 steps/min]\n",
+ "\u001b[92m19:03:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:15,290 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:03:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:03:15,948 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 573, 'y': 248})\n",
+ "2025-08-11 19:03:16,628 - agent.ComputerAgent - INFO - Computer: click({'x': 52, 'y': 77})\n",
+ "\u001b[92m19:03:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/reset \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5977/7340 [216:58<49:28, 27.5 steps/min]2025-08-11 19:03:17,298 - agent.ComputerAgent - INFO - Computer: click({'x': 588, 'y': 133})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5979/7340 [216:59<49:23, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:18,633 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:03:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:03:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5980/7340 [217:01<49:21, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:19,978 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:21,265 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:03:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5980/7340 [217:04<49:22, 27.5 steps/min]\u001b[92m19:03:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:23,302 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:03:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:23,979 - agent.ComputerAgent - INFO - Computer: click({'x': 275, 'y': 130})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:03:24,650 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:03:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:03:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ " 81%|████████████████████████████████--------| 5981/7340 [217:06<49:19, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:25,363 - agent.ComputerAgent - INFO - Computer: double_click({'x': 324, 'y': 348})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 324, 'y': 348})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:26,018 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:03:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:03:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 81%|████████████████████████████████--------| 5982/7340 [217:07<49:17, 27.6 steps/min]2025-08-11 19:03:26,696 - agent.ComputerAgent - INFO - Computer: double_click({'x': 212, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 212, 'y': 244})\n",
+ "2025-08-11 19:03:27,369 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:03:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 82%|████████████████████████████████--------| 5983/7340 [217:09<49:15, 27.6 steps/min]2025-08-11 19:03:28,040 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:03:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:03:28,730 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:03:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 5984/7340 [217:10<49:12, 27.6 steps/min]2025-08-11 19:03:29,398 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:03:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:30,090 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:03:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 82%|████████████████████████████████--------| 5984/7340 [217:11<49:13, 27.6 steps/min]2025-08-11 19:03:31,187 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:03:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 82%|████████████████████████████████--------| 5984/7340 [217:12<49:13, 27.5 steps/min]2025-08-11 19:03:31,848 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:03:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:03:32,551 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:03:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:03:33,200 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:03:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 5984/7340 [217:15<49:13, 27.5 steps/min]\u001b[92m19:03:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:34,530 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:03:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:03:35,172 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:03:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:03:35,850 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:03:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:03:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 5984/7340 [217:17<49:14, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:36,523 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 764})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 764})\n",
+ " 82%|████████████████████████████████--------| 5984/7340 [217:18<49:14, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:39,059 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 82%|████████████████████████████████--------| 5985/7340 [217:20<49:12, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:03:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:40,413 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 656, 'y': 564})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 656, 'y': 564})\n",
+ " 82%|████████████████████████████████--------| 5986/7340 [217:22<49:10, 27.5 steps/min]\u001b[92m19:03:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:41,093 - agent.ComputerAgent - INFO - Computer: click({'x': 648, 'y': 603})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 648, 'y': 603})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 5988/7340 [217:24<49:05, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 5988/7340 [217:25<49:05, 27.5 steps/min]2025-08-11 19:03:43,750 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:03:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c9e30e96-8d94-49de-8571-8c908e9d1660/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 5988/7340 [217:26<49:05, 27.5 steps/min]\u001b[92m19:03:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:03:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 82%|████████████████████████████████--------| 5988/7340 [217:28<49:06, 27.5 steps/min]\u001b[92m19:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:47,715 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:03:47,716 - agent.ComputerAgent - INFO - Computer: click({'x': 93, 'y': 264})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 93, 'y': 264})\n",
+ "\u001b[92m19:03:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:03:48,381 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:03:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:03:49,952 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 538, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 538, 'y': 248})\n",
+ " 82%|████████████████████████████████--------| 5988/7340 [217:31<49:06, 27.5 steps/min]2025-08-11 19:03:50,630 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:03:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.68s/it]\u001b[92m19:03:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 5990/7340 [217:33<49:02, 27.5 steps/min]2025-08-11 19:03:52,661 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:03:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 19:03:54,742 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:03:54,743 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 82%|████████████████████████████████--------| 5990/7340 [217:36<49:02, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:03:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:56,101 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 82%|████████████████████████████████--------| 5990/7340 [217:38<49:03, 27.5 steps/min]\u001b[92m19:03:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:57,420 - agent.ComputerAgent - INFO - Computer: click({'x': 289, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 289, 'y': 178})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:03:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:03:58,092 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 300, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:03:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:03:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:00,019 - agent.ComputerAgent - INFO - Computer: click({'x': 322, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 322, 'y': 181})\n",
+ "\u001b[92m19:04:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 5990/7340 [217:41<49:03, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:04:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:00,705 - agent.ComputerAgent - INFO - Computer: click({'x': 635, 'y': 529})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 635, 'y': 529})\n",
+ "2025-08-11 19:04:01,373 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 156})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 156})\n",
+ "2025-08-11 19:04:01,999 - agent.ComputerAgent - INFO - Computer: click({'x': 420, 'y': 106})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 420, 'y': 106})\n",
+ "2025-08-11 19:04:02,666 - agent.ComputerAgent - INFO - Computer: click({'x': 576, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 576, 'y': 142})\n",
+ "\u001b[92m19:04:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:04,706 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 82%|████████████████████████████████--------| 5993/7340 [217:46<48:56, 27.5 steps/min]2025-08-11 19:04:05,341 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:04:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:06,004 - agent.ComputerAgent - INFO - Computer: click({'x': 958, 'y': 357})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 958, 'y': 357})\n",
+ "2025-08-11 19:04:06,689 - agent.ComputerAgent - INFO - Computer: click({'x': 528, 'y': 212})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 528, 'y': 212})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:04:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 5997/7340 [217:49<48:46, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:07,991 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 388})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 388})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 6000/7340 [217:50<48:39, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:04:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:09,290 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:04:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:04:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:10,366 - agent.ComputerAgent - INFO - Computer: double_click({'x': 353, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 353, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 6000/7340 [217:52<48:39, 27.5 steps/min]\u001b[92m19:04:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:11,643 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 650}, {'x': 530, 'y': 368}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 989, 'y': 650}, {'x': 530, 'y': 368}]})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 82%|████████████████████████████████--------| 6001/7340 [217:53<48:37, 27.5 steps/min]\u001b[92m19:04:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:12,819 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 692})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 644, 'y': 692})\n",
+ " 82%|████████████████████████████████--------| 6002/7340 [217:54<48:34, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:04:14,357 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:04:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:04:15,724 - agent.ComputerAgent - INFO - Computer: type({'text': 'Total Revenue'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Total Revenue'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 6012/7340 [217:58<48:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:17,082 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:04:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:17,773 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:04:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:04:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:04:19,152 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 6013/7340 [218:00<48:06, 27.6 steps/min]2025-08-11 19:04:19,820 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 206})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 206})\n",
+ "2025-08-11 19:04:20,478 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:04:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:04:21,133 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:04:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:21,823 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ " 82%|████████████████████████████████--------| 6013/7340 [218:03<48:07, 27.6 steps/min]\u001b[92m19:04:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:22,483 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:04:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:23,161 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:04:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 82%|████████████████████████████████--------| 6014/7340 [218:04<48:05, 27.6 steps/min]2025-08-11 19:04:23,823 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:04:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:25,116 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:04:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8d1181fe-61f7-480d-a6d8-34c3f5138d67/close \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 6014/7340 [218:06<48:05, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:26,442 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:04:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:27,082 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:04:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:04:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 6014/7340 [218:08<48:05, 27.6 steps/min]2025-08-11 19:04:27,758 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 544, 'y': 248})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 544, 'y': 248})\n",
+ "2025-08-11 19:04:28,434 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:04:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 6015/7340 [218:10<48:03, 27.6 steps/min]2025-08-11 19:04:29,126 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:04:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:04:29,827 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:04:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 82%|████████████████████████████████--------| 6015/7340 [218:11<48:03, 27.6 steps/min]2025-08-11 19:04:30,533 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:04:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<48:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:04:38,063 - agent.ComputerAgent - INFO - Agent: I opened Chrome Settings and set the default Font size to the largest option: Very large.\n",
+ "\n",
+ "Optional for even easier reading: increase Page zoom or set a higher Minimum font size under Customize fonts.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I opened Chrome Settings and set the default Font size to the largest option: Very large.\n",
+ "\n",
+ "Optional for even easier reading: increase Page zoom or set a higher Minimum font size under Customize fonts.\n",
+ "\n",
+ "Task completed.\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.67s/it]2025-08-11 19:04:38,715 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 562\n",
+ " - prompt_tokens: 6724\n",
+ " - total_tokens: 7286\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 6016\n",
+ " - response_cost: $0.0073\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 562\n",
+ " - prompt_tokens: 6724\n",
+ " - total_tokens: 7286\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 6016\n",
+ " - response_cost: $0.0073\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 6016/7340 [218:22<48:03, 27.5 steps/min]\u001b[92m19:04:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.58s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.33s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 82%|████████████████████████████████--------| 6016/7340 [218:25<48:04, 27.5 steps/min]\u001b[92m19:04:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:44,694 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 153, 'y': 53})\n",
+ "\u001b[92m19:04:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:45,369 - agent.ComputerAgent - INFO - Computer: click({'x': 402, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 402, 'y': 105})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:46,680 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ " 82%|████████████████████████████████--------| 6016/7340 [218:28<48:04, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:47,370 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 926, 'y': 243})\n",
+ "\u001b[92m19:04:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:04:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:48,012 - agent.ComputerAgent - INFO - Computer: click({'x': 109, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 109, 'y': 77})\n",
+ "2025-08-11 19:04:48,686 - agent.ComputerAgent - INFO - Computer: click({'x': 473, 'y': 195})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 473, 'y': 195})\n",
+ "2025-08-11 19:04:49,372 - agent.ComputerAgent - INFO - Computer: click({'x': 574, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 574, 'y': 143})\n",
+ " 82%|████████████████████████████████--------| 6018/7340 [218:31<48:00, 27.5 steps/min]\u001b[92m19:04:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:04:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:50,708 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 651}, {'x': 520, 'y': 399}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 989, 'y': 651}, {'x': 520, 'y': 399}]})\n",
+ " 82%|████████████████████████████████--------| 6022/7340 [218:32<47:49, 27.6 steps/min]\u001b[92m19:04:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:51,386 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 335})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 335})\n",
+ "2025-08-11 19:04:52,053 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 82%|████████████████████████████████--------| 6023/7340 [218:33<47:47, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 82%|████████████████████████████████--------| 6024/7340 [218:34<47:45, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:04:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:53,891 - agent.ComputerAgent - INFO - Computer: drag({'start_element_description': 'Bottom-right zoom slider handle', 'end_element_description': 'Bottom-right zoom bar around 130%', 'x': 944, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'start_element_description': 'Bottom-right zoom slider handle', 'end_element_description': 'Bottom-right zoom bar around 130%', 'x': 944, 'y': 760})\n",
+ "\u001b[92m19:04:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:54,585 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 124, 'y': 165})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 124, 'y': 165})\n",
+ " 82%|████████████████████████████████--------| 6025/7340 [218:36<47:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:04:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:04:55,929 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:04:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b099bebb-084c-441f-a8aa-409b847efc75/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 6043/7340 [218:37<46:55, 27.6 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:04:56,561 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m19:04:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:57,863 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:04:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 82%|████████████████████████████████--------| 6055/7340 [218:40<46:24, 27.7 steps/min]\u001b[92m19:04:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:04:59,248 - agent.ComputerAgent - INFO - Computer: click({'x': 641, 'y': 534})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 641, 'y': 534})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:04:59,925 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:04:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:05:00,602 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:05:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6056/7340 [218:43<46:22, 27.7 steps/min]\u001b[92m19:05:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.80s/it]\u001b[92m19:05:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:05:02,784 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:05:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]2025-08-11 19:05:03,684 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:05:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:45<46:20, 27.7 steps/min]2025-08-11 19:05:04,353 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:05:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:05:05,259 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.70s/it]\n",
+ "\u001b[92m19:05:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]\n",
+ "\u001b[92m19:05:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/88fabe35-b20f-4415-a671-a5660eca5719/close \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:47<46:20, 27.7 steps/min]\n",
+ "2025-08-11 19:05:06,661 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:05:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:49<46:21, 27.7 steps/min]\u001b[92m19:05:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:50<46:21, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:05:09,656 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:05:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:51<46:21, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:05:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:53<46:21, 27.7 steps/min]\n",
+ "\u001b[92m19:05:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:05:12,015 - agent.ComputerAgent - INFO - Computer: click({'x': 532, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:05:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6057/7340 [218:54<46:22, 27.7 steps/min]\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6058/7340 [218:55<46:19, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]\u001b[92m19:05:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6058/7340 [218:56<46:19, 27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.27s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "2025-08-11 19:05:18,324 - agent.ComputerAgent - INFO - Computer: type({'text': 'Total Expenses'})\n",
+ " 83%|█████████████████████████████████-------| 6058/7340 [219:00<46:20, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 83%|█████████████████████████████████-------| 6059/7340 [219:01<46:18, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:05:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:05:20,824 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:05:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6059/7340 [219:02<46:18, 27.7 steps/min]\n",
+ " 83%|█████████████████████████████████-------| 6059/7340 [219:05<46:19, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:05:25,155 - agent.ComputerAgent - INFO - Agent: Data validation has been enabled for the entire “Pass/Fail/Held” column with a dropdown list containing Pass, Fail, and Held. The file has been saved.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 19:05:25,833 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 301\n",
+ " - prompt_tokens: 12178\n",
+ " - total_tokens: 12479\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 256\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 9472\n",
+ " - response_cost: $0.0076\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6060/7340 [219:08<46:17, 27.7 steps/min]\u001b[92m19:05:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6060/7340 [219:09<46:17, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6060/7340 [219:12<46:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6060/7340 [219:13<46:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/602e5cf1-7339-4b32-b8b1-64c7038c6b73/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6064/7340 [219:14<46:07, 27.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/365b6d93-b94b-4247-8bbf-35ffc55400bc/close \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6064/7340 [219:15<46:08, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:05:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:05:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:05:35,502 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 577})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/reset \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6065/7340 [219:19<46:06, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:05:39,082 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:03<00:09, 3.16s/it]\n",
+ "2025-08-11 19:05:39,775 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:05:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:05:40,436 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:05:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6065/7340 [219:23<46:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:06<00:02, 2.06s/it]2025-08-11 19:05:42,714 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:05:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6065/7340 [219:24<46:07, 27.6 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.80s/it]\n",
+ " 83%|█████████████████████████████████-------| 6065/7340 [219:25<46:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:05:45,887 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6065/7340 [219:28<46:08, 27.6 steps/min]\u001b[92m19:05:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:05:47,224 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:05:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6065/7340 [219:29<46:08, 27.6 steps/min]\u001b[92m19:05:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:05:48,898 - agent.ComputerAgent - INFO - Computer: click({'x': 230, 'y': 194})\n",
+ " 83%|█████████████████████████████████-------| 6066/7340 [219:33<46:06, 27.6 steps/min]\u001b[92m19:05:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:05:52,644 - agent.ComputerAgent - INFO - Computer: click({'x': 591, 'y': 135})\n",
+ " 83%|█████████████████████████████████-------| 6067/7340 [219:35<46:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:05:55,303 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:05:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6067/7340 [219:37<46:04, 27.6 steps/min]\n",
+ " 83%|█████████████████████████████████-------| 6067/7340 [219:39<46:05, 27.6 steps/min]\u001b[92m19:05:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:05:58,013 - agent.ComputerAgent - INFO - Computer: click({'x': 574, 'y': 143})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6067/7340 [219:40<46:05, 27.6 steps/min]2025-08-11 19:05:59,154 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:05:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6068/7340 [219:41<46:03, 27.6 steps/min]\u001b[92m19:06:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:01,243 - agent.ComputerAgent - INFO - Computer: click({'x': 499, 'y': 208})\n",
+ " 83%|█████████████████████████████████-------| 6069/7340 [219:43<46:01, 27.6 steps/min]\u001b[92m19:06:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:06:02,946 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 577, 'y': 430})\n",
+ " 83%|█████████████████████████████████-------| 6069/7340 [219:44<46:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6070/7340 [219:46<45:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:05,429 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6070/7340 [219:47<45:59, 27.6 steps/min]2025-08-11 19:06:06,105 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:06:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:06:06,786 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:06:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6070/7340 [219:48<45:59, 27.6 steps/min]2025-08-11 19:06:07,463 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:06:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6070/7340 [219:49<45:59, 27.6 steps/min]\u001b[92m19:06:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:08,665 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:06:08,667 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 133})\n",
+ " 83%|█████████████████████████████████-------| 6070/7340 [219:50<45:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6071/7340 [219:51<45:57, 27.6 steps/min]2025-08-11 19:06:10,356 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:06:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:06:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:06:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:06:12,063 - agent.ComputerAgent - INFO - Computer: click({'x': 983, 'y': 304})\n",
+ " 83%|█████████████████████████████████-------| 6072/7340 [219:55<45:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:06:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:15,775 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:06:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6072/7340 [219:57<45:55, 27.6 steps/min]\n",
+ " 83%|█████████████████████████████████-------| 6072/7340 [219:58<45:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:17,963 - agent.ComputerAgent - INFO - Computer: click({'x': 52, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6072/7340 [219:59<45:56, 27.6 steps/min]2025-08-11 19:06:18,606 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:06:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6073/7340 [220:02<45:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:22,696 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 524})\n",
+ " 83%|█████████████████████████████████-------| 6074/7340 [220:05<45:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:24,890 - agent.ComputerAgent - INFO - Computer: click({'x': 408, 'y': 530})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 408, 'y': 530})\n",
+ " 83%|█████████████████████████████████-------| 6074/7340 [220:06<45:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6075/7340 [220:07<45:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:27,219 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 83%|█████████████████████████████████-------| 6075/7340 [220:08<45:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:28,375 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:06:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6075/7340 [220:10<45:50, 27.6 steps/min]\u001b[92m19:06:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:30,077 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 474, 'y': 196}, {'x': 452, 'y': 721}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 474, 'y': 196}, {'x': 452, 'y': 721}]})\n",
+ " 83%|█████████████████████████████████-------| 6077/7340 [220:11<45:45, 27.6 steps/min]2025-08-11 19:06:30,737 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:06:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6078/7340 [220:13<45:43, 27.6 steps/min]2025-08-11 19:06:32,553 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:06:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6078/7340 [220:14<45:43, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9967e9e1-9446-4465-a911-ca5b69bde420/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:06:33,920 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 630, 'scroll_x': 0, 'x': 518, 'y': 420})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 630, 'scroll_x': 0, 'x': 518, 'y': 420})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:35,273 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 19:06:35,946 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 203\n",
+ " - prompt_tokens: 12939\n",
+ " - total_tokens: 13142\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 7424\n",
+ " - response_cost: $0.0099\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 203\n",
+ " - prompt_tokens: 12939\n",
+ " - total_tokens: 13142\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 7424\n",
+ " - response_cost: $0.0099\n",
+ " 83%|█████████████████████████████████-------| 6079/7340 [220:17<45:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6080/7340 [220:18<45:39, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:06:37,807 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:06:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6080/7340 [220:19<45:39, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.89s/it]2025-08-11 19:06:39,524 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:06:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6080/7340 [220:21<45:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.77s/it]\u001b[92m19:06:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6080/7340 [220:22<45:40, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.67s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]\n",
+ "2025-08-11 19:06:43,076 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:06:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6083/7340 [220:24<45:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:44,922 - agent.ComputerAgent - INFO - Agent: The desktop video is now playing in VLC. Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: The desktop video is now playing in VLC. Task completed.\n",
+ "2025-08-11 19:06:45,556 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 213\n",
+ " - prompt_tokens: 6060\n",
+ " - total_tokens: 6273\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0097\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 213\n",
+ " - prompt_tokens: 6060\n",
+ " - total_tokens: 6273\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0097\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:46,904 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4fc0beb9-a3df-4ca0-a4db-c42a24dcc166/close \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6084/7340 [220:28<45:30, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:47,570 - agent.ComputerAgent - INFO - Computer: click({'x': 532, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 532, 'y': 244})\n",
+ "\u001b[92m19:06:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:48,859 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:06:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6084/7340 [220:30<45:31, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:06:49,527 - agent.ComputerAgent - INFO - Computer: click({'x': 578, 'y': 259})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 578, 'y': 259})\n",
+ "\u001b[92m19:06:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:06:50,207 - agent.ComputerAgent - INFO - Computer: click({'x': 692, 'y': 480})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 692, 'y': 480})\n",
+ " 83%|█████████████████████████████████-------| 6085/7340 [220:31<45:29, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:06:51,399 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 392})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 392})\n",
+ " 83%|█████████████████████████████████-------| 6087/7340 [220:33<45:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:06:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:06:52,582 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 605, 'y': 171})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 605, 'y': 171})\n",
+ " 83%|█████████████████████████████████-------| 6088/7340 [220:34<45:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:54,397 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:06:54,399 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:55,719 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\npython3 calculator.py > log.txt\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'cd ~/Desktop\\npython3 calculator.py > log.txt\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6089/7340 [220:38<45:19, 27.6 steps/min]\u001b[92m19:06:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:06:57,016 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:06:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:06:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:06:57,686 - agent.ComputerAgent - INFO - Computer: click({'x': 574, 'y': 256})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 574, 'y': 256})\n",
+ " 83%|█████████████████████████████████-------| 6104/7340 [220:39<44:40, 27.7 steps/min]2025-08-11 19:06:58,332 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:06:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:06:59,212 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]\u001b[92m19:06:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6105/7340 [220:40<44:38, 27.7 steps/min]2025-08-11 19:06:59,876 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:06:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/10c454d7-987f-4a23-83d6-534bd9ba42c2/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:07:00,544 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:07:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6105/7340 [220:43<44:38, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.73s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c6aacbd6-6be0-4b63-afce-c2e86e28383c/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.43s/it]\n",
+ " 83%|█████████████████████████████████-------| 6105/7340 [220:45<44:39, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:04,726 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:07:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6105/7340 [220:46<44:39, 27.7 steps/min]2025-08-11 19:07:05,558 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:07:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:06,247 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:07:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6105/7340 [220:48<44:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:07:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:06,932 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 608, 'scroll_x': 0, 'x': 526, 'y': 422})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 608, 'scroll_x': 0, 'x': 526, 'y': 422})\n",
+ " 83%|█████████████████████████████████-------| 6105/7340 [220:49<44:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:09,672 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/34262f07-e5d2-47b9-913e-3f44032d779c/reset \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6107/7340 [220:52<44:35, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:12,022 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6107/7340 [220:53<44:35, 27.6 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:07:12,686 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:07:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6107/7340 [220:54<44:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:07:14,460 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:07:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 83%|█████████████████████████████████-------| 6107/7340 [220:57<44:36, 27.6 steps/min]2025-08-11 19:07:16,627 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:07:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/e076c76f-af63-43ad-a58d-7b09542ee5d9/close \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6122/7340 [221:00<43:58, 27.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ " 83%|█████████████████████████████████-------| 6122/7340 [221:02<43:58, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:22,477 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+t'})\n",
+ " 83%|█████████████████████████████████-------| 6122/7340 [221:04<43:58, 27.7 steps/min]\u001b[92m19:07:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:23,158 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 120, 'y': 234})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:24,452 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:07:25,107 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:07:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6122/7340 [221:07<43:59, 27.7 steps/min]\u001b[92m19:07:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:26,468 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:07:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:07:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6123/7340 [221:09<43:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 83%|█████████████████████████████████-------| 6123/7340 [221:11<43:57, 27.7 steps/min]\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:31,593 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.64s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ " 83%|█████████████████████████████████-------| 6123/7340 [221:14<43:58, 27.7 steps/min]\u001b[92m19:07:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:33,697 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:07:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/reset \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6124/7340 [221:15<43:56, 27.7 steps/min]\u001b[92m19:07:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:34,872 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 227})\n",
+ " 83%|█████████████████████████████████-------| 6124/7340 [221:16<43:56, 27.7 steps/min]\u001b[92m19:07:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:07:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:36,180 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:07:36,181 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 641})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:36,819 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ " 83%|█████████████████████████████████-------| 6125/7340 [221:18<43:54, 27.7 steps/min]\u001b[92m19:07:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:37,509 - agent.ComputerAgent - INFO - Computer: click({'x': 202, 'y': 165})\n",
+ "\u001b[92m19:07:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:07:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:38,140 - agent.ComputerAgent - INFO - Computer: double_click({'x': 611, 'y': 331})\n",
+ "2025-08-11 19:07:38,828 - agent.ComputerAgent - INFO - Computer: click({'x': 215, 'y': 68})\n",
+ "\u001b[92m19:07:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ " 83%|█████████████████████████████████-------| 6126/7340 [221:20<43:51, 27.7 steps/min]2025-08-11 19:07:39,466 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 615, 'scroll_x': 0, 'x': 20, 'y': 304})\n",
+ "2025-08-11 19:07:40,136 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:07:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6129/7340 [221:21<43:44, 27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6130/7340 [221:23<43:42, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6130/7340 [221:25<43:42, 27.7 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:07:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:44,567 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:07:44,568 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 725})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6130/7340 [221:26<43:42, 27.7 steps/min]2025-08-11 19:07:45,187 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:07:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:45,867 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:07:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6131/7340 [221:27<43:40, 27.7 steps/min]2025-08-11 19:07:46,537 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:07:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:47,184 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:07:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6131/7340 [221:28<43:40, 27.7 steps/min]2025-08-11 19:07:47,875 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:07:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6131/7340 [221:30<43:40, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:49,192 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:07:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:07:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:50,268 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 532, 'y': 244})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6131/7340 [221:32<43:41, 27.7 steps/min]2025-08-11 19:07:51,617 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:07:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6132/7340 [221:34<43:38, 27.7 steps/min]\u001b[92m19:07:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:07:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:07:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:53,461 - agent.ComputerAgent - INFO - Computer: click({'x': 512, 'y': 384})\n",
+ " 84%|█████████████████████████████████-------| 6132/7340 [221:35<43:39, 27.7 steps/min]\u001b[92m19:07:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:54,111 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 379, 'y': 377}, {'x': 586, 'y': 428}]})\n",
+ " 84%|█████████████████████████████████-------| 6133/7340 [221:36<43:36, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:07:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:56,120 - agent.ComputerAgent - INFO - Computer: type({'text': '00ff00'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 84%|█████████████████████████████████-------| 6134/7340 [221:38<43:34, 27.7 steps/min]\u001b[92m19:07:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:07:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:07:57,461 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 93})\n",
+ "\u001b[92m19:07:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:07:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6135/7340 [221:39<43:32, 27.7 steps/min]\u001b[92m19:07:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:07:58,612 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 474, 'y': 176}, {'x': 452, 'y': 715}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:07:59,968 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6136/7340 [221:41<43:30, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:01,115 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:08:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6138/7340 [221:42<43:25, 27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6138/7340 [221:43<43:25, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:03,327 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:08:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6138/7340 [221:45<43:25, 27.7 steps/min]2025-08-11 19:08:03,987 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:08:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:04,677 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:08:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6138/7340 [221:46<43:25, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:05,837 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:08:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6138/7340 [221:48<43:26, 27.7 steps/min]\u001b[92m19:08:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:07,157 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:08:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:08:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:07,859 - agent.ComputerAgent - INFO - Computer: click({'x': 576, 'y': 146})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6139/7340 [221:50<43:24, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6139/7340 [221:52<43:24, 27.7 steps/min]\u001b[92m19:08:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:08:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:08:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/96b656db-b210-453a-9230-f958f621d7b6/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6139/7340 [221:54<43:24, 27.7 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:13,059 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 648, 'scroll_x': 0, 'x': 278, 'y': 354})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:13,711 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 90})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6139/7340 [221:56<43:25, 27.7 steps/min]\n",
+ "\u001b[92m19:08:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:15,729 - agent.ComputerAgent - INFO - Computer: click({'x': 699, 'y': 227})\n",
+ "2025-08-11 19:08:16,387 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 110})\n",
+ " 84%|█████████████████████████████████-------| 6141/7340 [221:58<43:20, 27.7 steps/min]\u001b[92m19:08:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:08:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:17,739 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 624})\n",
+ "\u001b[92m19:08:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:19,060 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+p'})\n",
+ "2025-08-11 19:08:19,692 - agent.ComputerAgent - INFO - Computer: click({'x': 12, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 12, 'y': 524})\n",
+ "\u001b[92m19:08:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6143/7340 [222:02<43:15, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:21,041 - agent.ComputerAgent - INFO - Computer: click({'x': 189, 'y': 313})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 189, 'y': 313})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:21,686 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6145/7340 [222:03<43:11, 27.7 steps/min]\u001b[92m19:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:22,872 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:08:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:08:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:23,539 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:08:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:08:24,184 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 454, 'y': 719}, {'x': 698, 'y': 712}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 454, 'y': 719}, {'x': 698, 'y': 712}]})\n",
+ " 84%|█████████████████████████████████-------| 6147/7340 [222:06<43:06, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6147/7340 [222:07<43:06, 27.7 steps/min]2025-08-11 19:08:27,359 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:08:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6147/7340 [222:09<43:06, 27.7 steps/min]2025-08-11 19:08:27,988 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:08:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:28,638 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:08:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6147/7340 [222:10<43:07, 27.7 steps/min]2025-08-11 19:08:29,339 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:08:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:08:30,016 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:08:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6147/7340 [222:12<43:07, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:31,368 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:08:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:32,060 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:08:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:08:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6147/7340 [222:13<43:07, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:32,751 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:08:32,751 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 142})\n",
+ "2025-08-11 19:08:33,416 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:08:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6151/7340 [222:16<42:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/27fc4825-1617-494a-9308-b128bd8af05a/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:36,515 - agent.ComputerAgent - INFO - Computer: type({'text': '00ff00'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '00ff00'})\n",
+ " 84%|█████████████████████████████████-------| 6151/7340 [222:18<42:58, 27.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:20<42:56, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:21<42:56, 27.7 steps/min]2025-08-11 19:08:40,198 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:08:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:08:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<42:56, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]27.7 steps/min]2025-08-11 19:08:43,968 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:08:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:25<42:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 19:08:46,053 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+space'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+space'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]27.7 steps/min]2025-08-11 19:08:47,198 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:08:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:29<42:57, 27.7 steps/min]\u001b[92m19:08:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:30<42:58, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:08:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:32<42:58, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:51,113 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 92})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 92})\n",
+ "\u001b[92m19:08:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:51,781 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:08:51,782 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 93})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 93})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:53,782 - agent.ComputerAgent - INFO - Computer: type({'text': 'if [ -d /tmp/test_files ]; then echo \"Directory exists\"; else echo \"Directory not found\"; fi'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'if [ -d /tmp/test_files ]; then echo \"Directory exists\"; else echo \"Directory not found\"; fi'})\n",
+ " 84%|█████████████████████████████████-------| 6152/7340 [222:35<42:59, 27.6 steps/min]2025-08-11 19:08:54,469 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 980, 'y': 60})\n",
+ "\u001b[92m19:08:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:08:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:08:55,167 - agent.ComputerAgent - INFO - Computer: click({'x': 71, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 71, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:08:56,520 - agent.ComputerAgent - INFO - Computer: keykeypress({'keys': 'shift'})\n",
+ "INFO:agent.ComputerAgent:Computer: keykeypress({'keys': 'shift'})\n",
+ "2025-08-11 19:08:56,520 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m19:08:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Unknown computer action: keykeypress\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\u001b[92m19:08:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6155/7340 [222:38<42:51, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:57,211 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 483, 'scroll_x': 0, 'x': 274, 'y': 356})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 483, 'scroll_x': 0, 'x': 274, 'y': 356})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:08:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:08:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6158/7340 [222:40<42:44, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:08:59,585 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 425, 'y': 692}, {'x': 704, 'y': 690}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 425, 'y': 692}, {'x': 704, 'y': 690}]})\n",
+ "\u001b[92m19:08:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:09:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:00,913 - agent.ComputerAgent - INFO - Computer: click({'x': 21, 'y': 139})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 21, 'y': 139})\n",
+ "\u001b[92m19:09:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6158/7340 [222:42<42:44, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:09:01,543 - agent.ComputerAgent - INFO - Computer: click({'x': 216, 'y': 312})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 216, 'y': 312})\n",
+ "\u001b[92m19:09:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:02,224 - agent.ComputerAgent - INFO - Computer: click({'x': 169, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 169, 'y': 232})\n",
+ " 84%|█████████████████████████████████-------| 6160/7340 [222:43<42:39, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 84%|█████████████████████████████████-------| 6163/7340 [222:44<42:32, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:09:04,379 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m19:09:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6163/7340 [222:46<42:32, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6163/7340 [222:47<42:32, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:06,577 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:09:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6163/7340 [222:48<42:33, 27.7 steps/min]2025-08-11 19:09:08,379 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:09:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:09,724 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+alt+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+alt+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6163/7340 [222:51<42:33, 27.7 steps/min]2025-08-11 19:09:10,386 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:09:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:09:11,039 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:09:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 84%|█████████████████████████████████-------| 6168/7340 [222:52<42:21, 27.7 steps/min]2025-08-11 19:09:12,201 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:09:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6168/7340 [222:53<42:21, 27.7 steps/min]2025-08-11 19:09:12,886 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:09:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/43a383a0-163d-4a8b-8494-0e1d1eab6cd6/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:13,532 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m19:09:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6168/7340 [222:55<42:21, 27.7 steps/min]2025-08-11 19:09:15,263 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:09:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6168/7340 [222:57<42:21, 27.7 steps/min]2025-08-11 19:09:15,939 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:09:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:17,294 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f'})\n",
+ " 84%|█████████████████████████████████-------| 6168/7340 [222:59<42:22, 27.7 steps/min]2025-08-11 19:09:17,979 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:09:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:09:18,632 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:09:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6169/7340 [223:00<42:19, 27.7 steps/min]2025-08-11 19:09:19,327 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:09:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:20,029 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:09:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6169/7340 [223:01<42:20, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:09:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6169/7340 [223:02<42:20, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 84%|█████████████████████████████████-------| 6170/7340 [223:03<42:17, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:23,030 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m19:09:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.81s/it]2025-08-11 19:09:24,534 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://github.com/liangjs333/4th-year-in-tsinghua-eng'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6170/7340 [223:06<42:18, 27.7 steps/min]2025-08-11 19:09:25,328 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.71s/it]INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:09:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:09:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00,  1.40s/it]\n",
+ " 84%|█████████████████████████████████-------| 6171/7340 [223:10<42:16, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:09:30,375 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 84%|█████████████████████████████████-------| 6171/7340 [223:12<42:16, 27.6 steps/min]\u001b[92m19:09:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:31,053 - agent.ComputerAgent - INFO - Computer: click({'x': 859, 'y': 247})\n",
+ "\u001b[92m19:09:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:31,737 - agent.ComputerAgent - INFO - Computer: click({'x': 785, 'y': 715})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6172/7340 [223:13<42:14, 27.6 steps/min]2025-08-11 19:09:32,410 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:09:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6174/7340 [223:14<42:09, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:33,784 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:35,174 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f8'})\n",
+ " 84%|█████████████████████████████████-------| 6174/7340 [223:16<42:10, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 84%|█████████████████████████████████-------| 6177/7340 [223:17<42:02, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:09:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:39,211 - agent.ComputerAgent - INFO - Computer: type({'text': '00ff00'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6177/7340 [223:20<42:03, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:09:39,891 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:09:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:09:41,245 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:42,586 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+f3'})\n",
+ "\u001b[92m19:09:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:43,931 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6178/7340 [223:25<42:01, 27.7 steps/min]2025-08-11 19:09:44,639 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 626, 'scroll_x': 0, 'x': 270, 'y': 356})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:09:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:45,940 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:09:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:09:46,640 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:09:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:09:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6180/7340 [223:29<41:56, 27.7 steps/min]\u001b[92m19:09:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:47,977 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:09:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:09:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:48,649 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:09:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6181/7340 [223:30<41:54, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:09:49,339 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:09:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:09:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/258c9010-cdb1-400f-b018-bddcd76c5664/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:50,037 - agent.ComputerAgent - INFO - Computer: click({'x': 122, 'y': 243})\n",
+ " 84%|█████████████████████████████████-------| 6181/7340 [223:32<41:54, 27.7 steps/min]\u001b[92m19:09:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:09:51,332 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:09:51,332 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 918, 'y': 50}, {'x': 984, 'y': 629}]})\n",
+ " 84%|█████████████████████████████████-------| 6182/7340 [223:33<41:52, 27.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6183/7340 [223:34<41:50, 27.7 steps/min]2025-08-11 19:09:53,498 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:09:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6183/7340 [223:35<41:50, 27.7 steps/min]2025-08-11 19:09:54,690 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:09:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6183/7340 [223:36<41:50, 27.7 steps/min]2025-08-11 19:09:55,380 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:09:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:09:56,078 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:09:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:09:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2960baac-68ef-44af-8d6c-fe4b45263791/reset \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6183/7340 [223:38<41:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:09:58,267 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:09:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6183/7340 [223:40<41:51, 27.6 steps/min]\u001b[92m19:09:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:09:59,829 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:09:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.68s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:02,403 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://drive.google.com/uc?export=download&id=1VIwIhLpkRr72DTuJINsZQ-DDwHAtfddq\\n'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.65s/it]INFO:agent.ComputerAgent:Computer: type({'text': 'https://drive.google.com/uc?export=download&id=1VIwIhLpkRr72DTuJINsZQ-DDwHAtfddq\\n'})\n",
+ " 84%|█████████████████████████████████-------| 6183/7340 [223:44<41:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/4d0f1943-7dac-45a8-a354-73c43955694a/reset \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 19:10:03,550 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:10:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 84%|█████████████████████████████████-------| 6184/7340 [223:45<41:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f55deafd-5880-4477-aaf2-d27143befb59/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6184/7340 [223:46<41:49, 27.6 steps/min]2025-08-11 19:10:05,753 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:10:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:07,054 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ " 84%|█████████████████████████████████-------| 6184/7340 [223:48<41:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:10:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:10:08,375 - agent.ComputerAgent - INFO - Computer: click({'x': 143, 'y': 634})\n",
+ "\u001b[92m19:10:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:10:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:10:09,049 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:10:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:10:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:09,700 - agent.ComputerAgent - INFO - Computer: click({'x': 147, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6185/7340 [223:51<41:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:11,051 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:10:11,053 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "2025-08-11 19:10:11,739 - agent.ComputerAgent - INFO - Computer: click({'x': 298, 'y': 457})\n",
+ "\u001b[92m19:10:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6187/7340 [223:53<41:43, 27.6 steps/min]2025-08-11 19:10:12,438 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 731, 'y': 709}, {'x': 412, 'y': 715}]})\n",
+ "2025-08-11 19:10:13,110 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:10:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6189/7340 [223:55<41:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:15,148 - agent.ComputerAgent - INFO - Computer: type({'text': \"find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print -exec gzip -f {} +\"})\n",
+ "\u001b[92m19:10:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6190/7340 [223:56<41:36, 27.6 steps/min]2025-08-11 19:10:15,800 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:10:15,801 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 335})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6191/7340 [223:58<41:34, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:17,116 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:10:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:10:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:10:17,772 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 669, 'y': 625})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6192/7340 [224:00<41:31, 27.6 steps/min]\u001b[92m19:10:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:10:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:10:19,661 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 637, 'scroll_x': 0, 'x': 264, 'y': 356})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 637, 'scroll_x': 0, 'x': 264, 'y': 356})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6193/7340 [224:02<41:29, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:22,019 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6194/7340 [224:03<41:27, 27.6 steps/min]2025-08-11 19:10:22,682 - agent.ComputerAgent - INFO - Computer: click({'x': 193, 'y': 314})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 193, 'y': 314})\n",
+ "2025-08-11 19:10:23,360 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:10:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:24,040 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:10:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6195/7340 [224:06<41:25, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:25,420 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:10:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:26,120 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:10:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:26,810 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:10:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:27,500 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:10:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:10:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6196/7340 [224:09<41:23, 27.6 steps/min]2025-08-11 19:10:28,162 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:10:28,163 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 427})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:29,908 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'home'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'home'})\n",
+ " 84%|█████████████████████████████████-------| 6196/7340 [224:11<41:23, 27.6 steps/min]2025-08-11 19:10:30,527 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:10:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:31,199 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:10:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:32,551 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ " 84%|█████████████████████████████████-------| 6198/7340 [224:14<41:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:10:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6199/7340 [224:15<41:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:35,216 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f'})\n",
+ "\u001b[92m19:10:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:10:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 84%|█████████████████████████████████-------| 6199/7340 [224:16<41:16, 27.6 steps/min]2025-08-11 19:10:35,903 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:10:35,904 - agent.ComputerAgent - INFO - Computer: double_click({'x': 367, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 367, 'y': 105})\n",
+ "\u001b[92m19:10:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:10:36,539 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:10:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:10:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:10:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 84%|█████████████████████████████████-------| 6200/7340 [224:20<41:14, 27.6 steps/min]\u001b[92m19:10:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:39,334 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:10:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:39,976 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 918, 'y': 160}, {'x': 987, 'y': 627}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 918, 'y': 160}, {'x': 987, 'y': 627}]})\n",
+ "\u001b[92m19:10:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:10:40,622 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:10:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:41,987 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 84%|█████████████████████████████████-------| 6201/7340 [224:23<41:13, 27.6 steps/min]2025-08-11 19:10:42,653 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 217})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 910, 'y': 217})\n",
+ "\u001b[92m19:10:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:43,315 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 613})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 103, 'y': 613})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:44,670 - agent.ComputerAgent - INFO - Computer: type({'text': 'liangjs333 4th-year-in-tsinghua-eng'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'liangjs333 4th-year-in-tsinghua-eng'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6203/7340 [224:27<41:08, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:45,984 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:10:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:46,662 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:10:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|█████████████████████████████████-------| 6206/7340 [224:28<41:01, 27.6 steps/min]2025-08-11 19:10:47,340 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:10:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:10:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:48,031 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 731, 'y': 712}, {'x': 416, 'y': 715}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 731, 'y': 712}, {'x': 416, 'y': 715}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:48,701 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:10:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|█████████████████████████████████-------| 6207/7340 [224:31<40:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:51,106 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6207/7340 [224:33<40:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:53,151 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+b'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+b'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/84a5d283-63f1-43fc-b483-76116d67f385/reset \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6208/7340 [224:34<40:57, 27.6 steps/min]2025-08-11 19:10:53,832 - agent.ComputerAgent - INFO - Computer: click({'x': 711, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 711, 'y': 60})\n",
+ "2025-08-11 19:10:54,481 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:10:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:55,132 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:10:55,782 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ " 85%|█████████████████████████████████-------| 6208/7340 [224:37<40:57, 27.6 steps/min]\u001b[92m19:10:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:56,440 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:10:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:10:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6209/7340 [224:38<40:55, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:10:57,737 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:10:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:10:58,389 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:10:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:10:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6209/7340 [224:40<40:55, 27.6 steps/min]2025-08-11 19:10:59,096 - agent.ComputerAgent - INFO - Computer: double_click({'button': 'left', 'x': 987, 'y': 559})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'button': 'left', 'x': 987, 'y': 559})\n",
+ "2025-08-11 19:10:59,731 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:10:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|█████████████████████████████████-------| 6210/7340 [224:41<40:53, 27.6 steps/min]2025-08-11 19:11:00,382 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:11:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:01,041 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6210/7340 [224:42<40:53, 27.6 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:11:01,701 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m19:11:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:03,091 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x01'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x01'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6210/7340 [224:44<40:53, 27.6 steps/min]2025-08-11 19:11:03,741 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:11:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:04,431 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:11:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|█████████████████████████████████-------| 6211/7340 [224:46<40:51, 27.6 steps/min]2025-08-11 19:11:05,082 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:11:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:06,395 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+b'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+b'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:07,757 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+h'})\n",
+ " 85%|█████████████████████████████████-------| 6211/7340 [224:49<40:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:09,091 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:11:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:11:09,752 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ " 85%|█████████████████████████████████-------| 6211/7340 [224:51<40:52, 27.6 steps/min]\u001b[92m19:11:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:11,082 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 483})\n",
+ " 85%|█████████████████████████████████-------| 6211/7340 [224:52<40:52, 27.6 steps/min]\u001b[92m19:11:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:11,766 - agent.ComputerAgent - INFO - Computer: click({'x': 392, 'y': 448})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:13,099 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:14,472 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6212/7340 [224:58<40:51, 27.6 steps/min]\u001b[92m19:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:17,851 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+right'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 19:11:18,516 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 629, 'scroll_x': 0, 'x': 20, 'y': 138})\n",
+ " 85%|█████████████████████████████████-------| 6215/7340 [225:00<40:43, 27.6 steps/min]\u001b[92m19:11:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:19,184 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:11:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:19,880 - agent.ComputerAgent - INFO - Computer: click({'x': 459, 'y': 64})\n",
+ "\u001b[92m19:11:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6216/7340 [225:01<40:41, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:11:20,563 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:22,601 - agent.ComputerAgent - INFO - Computer: type({'text': 'ls -lR /tmp/test_files && echo \"-----\" && find /tmp/test_files -type f -mtime 30 -print'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6217/7340 [225:04<40:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:23,900 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 731, 'y': 710}, {'x': 420, 'y': 715}]})\n",
+ "\u001b[92m19:11:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:24,533 - agent.ComputerAgent - INFO - Computer: click({'x': 923, 'y': 134})\n",
+ " 85%|█████████████████████████████████-------| 6219/7340 [225:06<40:34, 27.6 steps/min]\u001b[92m19:11:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:25,223 - agent.ComputerAgent - INFO - Computer: click({'x': 993, 'y': 760})\n",
+ " 85%|█████████████████████████████████-------| 6221/7340 [225:07<40:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:26,412 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "\u001b[92m19:11:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|█████████████████████████████████-------| 6222/7340 [225:08<40:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6222/7340 [225:09<40:27, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:29,003 - agent.ComputerAgent - INFO - Computer: click({'x': 914, 'y': 270})\n",
+ " 85%|█████████████████████████████████-------| 6222/7340 [225:10<40:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:29,691 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:11:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:31,027 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 918, 'y': 221}, {'x': 987, 'y': 629}]})\n",
+ "2025-08-11 19:11:31,697 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:11:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6223/7340 [225:13<40:25, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:32,342 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:11:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:33,011 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:11:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:33,703 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:11:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:11:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6224/7340 [225:15<40:23, 27.6 steps/min]2025-08-11 19:11:34,365 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:11:34,366 - agent.ComputerAgent - INFO - Computer: click({'x': 28, 'y': 10})\n",
+ "2025-08-11 19:11:34,994 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:11:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6224/7340 [225:17<40:23, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:37,052 - agent.ComputerAgent - INFO - Computer: click({'x': 161, 'y': 71, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/40945322-97a1-4827-b747-39d3f993fa3d/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:37,710 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ " 85%|█████████████████████████████████-------| 6226/7340 [225:19<40:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:11:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:38,730 - agent.ComputerAgent - INFO - Computer: click({'x': 159, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/reset \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6227/7340 [225:20<40:16, 27.6 steps/min]2025-08-11 19:11:39,390 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:11:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:40,050 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:11:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6228/7340 [225:22<40:14, 27.6 steps/min]2025-08-11 19:11:40,724 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m19:11:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:42,066 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://drive.google.com/uc?export=download&id=1VIwIhLpkRr72DTuJINsZQ-DDwHAtfddq\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6228/7340 [225:23<40:14, 27.6 steps/min]2025-08-11 19:11:42,753 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:11:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:44,139 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "2025-08-11 19:11:44,810 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:46,135 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+b'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6229/7340 [225:28<40:12, 27.6 steps/min]\u001b[92m19:11:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:47,501 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:11:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6230/7340 [225:29<40:10, 27.6 steps/min]\u001b[92m19:11:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:49,502 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:11:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:11:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:11:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6231/7340 [225:31<40:08, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:11:50,196 - agent.ComputerAgent - INFO - Computer: click({'x': 673, 'y': 476})\n",
+ "2025-08-11 19:11:50,857 - agent.ComputerAgent - INFO - Computer: click({'x': 331, 'y': 261})\n",
+ "2025-08-11 19:11:51,494 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:11:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6231/7340 [225:33<40:08, 27.6 steps/min]2025-08-11 19:11:52,197 - agent.ComputerAgent - INFO - Computer: double_click({'x': 367, 'y': 106})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|█████████████████████████████████-------| 6233/7340 [225:34<40:03, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:11:53,523 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:11:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:11:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:11:54,202 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:11:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:54,875 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 244})\n",
+ "2025-08-11 19:11:55,543 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:11:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:11:56,244 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 85%|█████████████████████████████████-------| 6234/7340 [225:37<40:01, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:11:57,937 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:11:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:11:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:12:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:01,305 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 85%|█████████████████████████████████-------| 6235/7340 [225:43<40:00, 27.6 steps/min]2025-08-11 19:12:01,940 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m19:12:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:02,592 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:12:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:03,652 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 62})\n",
+ "\u001b[92m19:12:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:12:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|█████████████████████████████████-------| 6237/7340 [225:45<39:55, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:04,317 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 217})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 969, 'y': 217})\n",
+ "2025-08-11 19:12:05,018 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 573, 'scroll_x': 0, 'x': 20, 'y': 304})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 573, 'scroll_x': 0, 'x': 20, 'y': 304})\n",
+ " 85%|█████████████████████████████████-------| 6238/7340 [225:46<39:53, 27.6 steps/min]2025-08-11 19:12:05,683 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:12:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|██████████████████████████████████------| 6240/7340 [225:47<39:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:07,334 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:12:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:09,231 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x1b[3~'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x1b[3~'})\n",
+ " 85%|██████████████████████████████████------| 6240/7340 [225:50<39:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:12:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:10,553 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:12:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 85%|██████████████████████████████████------| 6242/7340 [225:52<39:44, 27.6 steps/min]\u001b[92m19:12:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:11,869 - agent.ComputerAgent - INFO - Computer: click({'x': 772, 'y': 129})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 772, 'y': 129})\n",
+ "\u001b[92m19:12:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:12,495 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:12:12,496 - agent.ComputerAgent - INFO - Computer: click({'x': 659, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 659, 'y': 178})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6242/7340 [225:54<39:44, 27.6 steps/min]2025-08-11 19:12:13,186 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:12:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:13,847 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 351})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 351})\n",
+ " 85%|██████████████████████████████████------| 6244/7340 [225:55<39:39, 27.6 steps/min]2025-08-11 19:12:14,474 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:12:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:15,153 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:12:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6245/7340 [225:57<39:37, 27.6 steps/min]2025-08-11 19:12:15,843 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m19:12:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:16,513 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:12:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:17,568 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:12:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6245/7340 [226:00<39:37, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:18,943 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:12:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:20,385 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:12:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6245/7340 [226:02<39:37, 27.6 steps/min]2025-08-11 19:12:21,075 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:12:21,076 - agent.ComputerAgent - INFO - Computer: click({'x': 918, 'y': 217})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 918, 'y': 217})\n",
+ "2025-08-11 19:12:21,754 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:12:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:23,773 - agent.ComputerAgent - INFO - Computer: click({'x': 321, 'y': 305, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 321, 'y': 305, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6245/7340 [226:06<39:38, 27.6 steps/min]\u001b[92m19:12:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:25,144 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:12:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:25,824 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:12:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:12:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:27,819 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 85%|██████████████████████████████████------| 6247/7340 [226:09<39:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:12:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m19:12:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:29,160 - agent.ComputerAgent - INFO - Computer: click({'x': 207, 'y': 315})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 207, 'y': 315})\n",
+ "2025-08-11 19:12:29,827 - agent.ComputerAgent - INFO - Computer: click({'x': 999, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 999, 'y': 760})\n",
+ "\u001b[92m19:12:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6249/7340 [226:11<39:29, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:30,484 - agent.ComputerAgent - INFO - Computer: click({'x': 873, 'y': 262})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 873, 'y': 262})\n",
+ "\u001b[92m19:12:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:31,529 - agent.ComputerAgent - INFO - Computer: click({'x': 682, 'y': 473})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 682, 'y': 473})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6251/7340 [226:13<39:24, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:32,884 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:12:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:33,565 - agent.ComputerAgent - INFO - Computer: click({'x': 291, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 291, 'y': 101})\n",
+ " 85%|██████████████████████████████████------| 6253/7340 [226:15<39:19, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:34,226 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m19:12:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|██████████████████████████████████------| 6254/7340 [226:16<39:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 85%|██████████████████████████████████------| 6254/7340 [226:17<39:17, 27.6 steps/min]\u001b[92m19:12:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:36,607 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 254})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 910, 'y': 254})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:37,982 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+l'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6254/7340 [226:19<39:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:38,619 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:12:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:39,304 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:12:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:39,973 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:12:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:41,961 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f'})\n",
+ " 85%|██████████████████████████████████------| 6255/7340 [226:23<39:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:42,664 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:12:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:43,354 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:12:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:12:44,034 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:12:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:44,694 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:12:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|██████████████████████████████████------| 6257/7340 [226:26<39:11, 27.6 steps/min]2025-08-11 19:12:45,354 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:12:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:12:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6257/7340 [226:27<39:11, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:46,694 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:12:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:48,427 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 987, 'y': 148}, {'x': 984, 'y': 548}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 987, 'y': 148}, {'x': 984, 'y': 548}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6257/7340 [226:30<39:12, 27.6 steps/min]\u001b[92m19:12:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:12:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:50,294 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:12:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6261/7340 [226:32<39:02, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:50,941 - agent.ComputerAgent - INFO - Computer: click({'x': 677, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 677, 'y': 60})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:12:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1e0da601-e961-4fe5-ac6b-06f530294395/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:12:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:52,985 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://arxiv.org/abs/1810.04805'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://arxiv.org/abs/1810.04805'})\n",
+ " 85%|██████████████████████████████████------| 6261/7340 [226:34<39:02, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:12:53,666 - agent.ComputerAgent - INFO - Computer: click({'x': 920, 'y': 270})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 920, 'y': 270})\n",
+ "2025-08-11 19:12:54,357 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 105, 'y': 312}, {'x': 209, 'y': 101}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 105, 'y': 312}, {'x': 209, 'y': 101}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:12:55,707 - agent.ComputerAgent - INFO - Computer: type({'text': \"find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print\"})\n",
+ "2025-08-11 19:12:57,024 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:12:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:12:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:12:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:12:59,655 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+j'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+j'})\n",
+ " 85%|██████████████████████████████████------| 6263/7340 [226:41<38:58, 27.6 steps/min]2025-08-11 19:13:00,317 - agent.ComputerAgent - INFO - Computer: click({'x': 670, 'y': 355})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 670, 'y': 355})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]2025-08-11 19:13:02,366 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 85%|██████████████████████████████████------| 6266/7340 [226:44<38:51, 27.6 steps/min]\u001b[92m19:13:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:03,054 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:13:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]2025-08-11 19:13:05,788 - agent.ComputerAgent - INFO - Computer: type({'text': 'user:liangjs333 4th year tsinghua eng'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'user:liangjs333 4th year tsinghua eng'})\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:08,323 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x1424\\x161\\x058\\x159\\x159\\x153\\x162\\x151\\x154\\x16E\\x151 \\x170 \\x152 \\x141\\x159\\x155 \\x161\\x054\\x155 \\x144\\x156 \\x150\\x163 \\x146\\x151'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x1424\\x161\\x058\\x159\\x159\\x153\\x162\\x151\\x154\\x16E\\x151 \\x170 \\x152 \\x141\\x159\\x155 \\x161\\x054\\x155 \\x144\\x156 \\x150\\x163 \\x146\\x151'})\n",
+ "\u001b[92m19:13:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6268/7340 [226:50<38:47, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:08,991 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 243, 'y': 52})\n",
+ "\u001b[92m19:13:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:09,666 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:13:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:13:10,370 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 652, 'scroll_x': 0, 'x': 347, 'y': 653})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 652, 'scroll_x': 0, 'x': 347, 'y': 653})\n",
+ "\u001b[92m19:13:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:10,995 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:13:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 85%|██████████████████████████████████------| 6269/7340 [226:52<38:45, 27.6 steps/min]\u001b[92m19:13:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:11,652 - agent.ComputerAgent - INFO - Computer: move({'x': 690, 'y': 253})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 690, 'y': 253})\n",
+ "2025-08-11 19:13:12,295 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:13:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:12,991 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 195})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 136, 'y': 195})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6271/7340 [226:54<38:40, 27.6 steps/min]2025-08-11 19:13:13,684 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:13:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:14,339 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:13:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6273/7340 [226:56<38:36, 27.6 steps/min]\u001b[92m19:13:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:15,690 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:13:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:13:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:16,385 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:13:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:17,096 - agent.ComputerAgent - INFO - Computer: click({'x': 504, 'y': 225})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 504, 'y': 225})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 85%|██████████████████████████████████------| 6273/7340 [226:59<38:36, 27.6 steps/min]\u001b[92m19:13:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:13:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:19,458 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:13:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:13:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6274/7340 [227:01<38:34, 27.6 steps/min]2025-08-11 19:13:20,490 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:13:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:21,174 - agent.ComputerAgent - INFO - Computer: click({'x': 410, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 410, 'y': 64})\n",
+ "2025-08-11 19:13:21,810 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 243})\n",
+ "2025-08-11 19:13:22,487 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:13:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 85%|██████████████████████████████████------| 6274/7340 [227:04<38:34, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:25,836 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 86%|██████████████████████████████████------| 6276/7340 [227:07<38:30, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:26,499 - agent.ComputerAgent - INFO - Computer: click({'x': 1005, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1005, 'y': 101})\n",
+ "\u001b[92m19:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:27,167 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 489, 'y': 64})\n",
+ "\u001b[92m19:13:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6277/7340 [227:09<38:28, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:28,480 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:13:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:29,146 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:13:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:29,817 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:13:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:13:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6279/7340 [227:12<38:23, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:31,197 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 925, 'y': 313}, {'x': 984, 'y': 658}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 925, 'y': 313}, {'x': 984, 'y': 658}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:32,470 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "2025-08-11 19:13:33,090 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:33,778 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 217})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 217})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6279/7340 [227:16<38:24, 27.6 steps/min]2025-08-11 19:13:35,922 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 389})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 248, 'y': 389})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6283/7340 [227:17<38:14, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:36,567 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:13:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:13:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:37,858 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 43})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 43})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/81e43616-5be3-4846-b466-62247641452b/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 86%|██████████████████████████████████------| 6284/7340 [227:20<38:12, 27.6 steps/min]\u001b[92m19:13:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:13:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:39,877 - agent.ComputerAgent - INFO - Computer: click({'x': 145, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 145, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6285/7340 [227:22<38:09, 27.6 steps/min]\u001b[92m19:13:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:13:41,219 - agent.ComputerAgent - INFO - Computer: click({'x': 694, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 694, 'y': 249})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:13:42,533 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:13:43,914 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.71s/it]\u001b[92m19:13:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6286/7340 [227:26<38:08, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]2025-08-11 19:13:46,296 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:13:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:46,976 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:13:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6289/7340 [227:29<38:01, 27.6 steps/min]\u001b[92m19:13:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "2025-08-11 19:13:48,476 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:13:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:49,333 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:13:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6289/7340 [227:31<38:01, 27.6 steps/min]2025-08-11 19:13:49,986 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:13:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6289/7340 [227:32<38:01, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:51,782 - agent.ComputerAgent - INFO - Computer: click({'x': 594, 'y': 257})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 594, 'y': 257})\n",
+ " 86%|██████████████████████████████████------| 6289/7340 [227:33<38:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:13:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:52,496 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 261, 'y': 124})\n",
+ "\u001b[92m19:13:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:13:53,154 - agent.ComputerAgent - INFO - Computer: click({'x': 923, 'y': 292})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 923, 'y': 292})\n",
+ "\u001b[92m19:13:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:13:54,469 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://arxiv.org/abs/1810.04805'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://arxiv.org/abs/1810.04805'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6290/7340 [227:36<37:59, 27.6 steps/min]2025-08-11 19:13:55,139 - agent.ComputerAgent - INFO - Computer: click({'x': 280, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 280, 'y': 91})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:13:55,816 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:13:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:56,497 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:13:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:57,196 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:13:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6293/7340 [227:39<37:52, 27.6 steps/min]2025-08-11 19:13:58,275 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:13:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6294/7340 [227:40<37:50, 27.6 steps/min]2025-08-11 19:13:58,935 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:13:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:13:59,576 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:13:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6294/7340 [227:41<37:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:14:01,266 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:14:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6294/7340 [227:43<37:50, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 86%|██████████████████████████████████------| 6294/7340 [227:44<37:50, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:03,790 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 254})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 910, 'y': 254})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6294/7340 [227:45<37:51, 27.6 steps/min]\u001b[92m19:14:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:14:04,459 - agent.ComputerAgent - INFO - Computer: click({'x': 973, 'y': 170})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 973, 'y': 170})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:14:05,475 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:14:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6295/7340 [227:47<37:48, 27.6 steps/min]2025-08-11 19:14:06,162 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:14:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:06,797 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:14:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:14:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6296/7340 [227:50<37:46, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:09,538 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:14:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:14:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6296/7340 [227:52<37:47, 27.6 steps/min]\u001b[92m19:14:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:14:10,891 - agent.ComputerAgent - INFO - Computer: click({'x': 475, 'y': 172})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 475, 'y': 172})\n",
+ "\u001b[92m19:14:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:11,572 - agent.ComputerAgent - INFO - Computer: click({'x': 973, 'y': 228})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 973, 'y': 228})\n",
+ "\u001b[92m19:14:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 86%|██████████████████████████████████------| 6297/7340 [227:53<37:44, 27.6 steps/min]\u001b[92m19:14:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:12,934 - agent.ComputerAgent - INFO - Agent: I opened a terminal and ran:\n",
+ "find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print -exec gzip -f {} +\n",
+ "\n",
+ "Verification:\n",
+ "- Listing after compression shows the 30-day-old files are now .gz.\n",
+ "- Running:\n",
+ "find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print\n",
+ "returned no results, confirming all matching files were compressed and originals replaced.\n",
+ "\n",
+ "Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: I opened a terminal and ran:\n",
+ "find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print -exec gzip -f {} +\n",
+ "\n",
+ "Verification:\n",
+ "- Listing after compression shows the 30-day-old files are now .gz.\n",
+ "- Running:\n",
+ "find /tmp/test_files -type f -mtime 30 -not -name '*.gz' -print\n",
+ "returned no results, confirming all matching files were compressed and originals replaced.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 19:14:13,557 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 808\n",
+ " - prompt_tokens: 10818\n",
+ " - total_tokens: 11626\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0216\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 808\n",
+ " - prompt_tokens: 10818\n",
+ " - total_tokens: 11626\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 704\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0216\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:14:14,910 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6299/7340 [227:56<37:40, 27.6 steps/min]2025-08-11 19:14:15,587 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 512, 'y': 225}, {'x': 514, 'y': 225}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 512, 'y': 225}, {'x': 514, 'y': 225}]})\n",
+ "2025-08-11 19:14:16,280 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 918, 'y': 379}, {'x': 984, 'y': 629}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 918, 'y': 379}, {'x': 984, 'y': 629}]})\n",
+ " 86%|██████████████████████████████████------| 6300/7340 [227:58<37:37, 27.6 steps/min]2025-08-11 19:14:16,896 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:14:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:17,586 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:14:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:14:20,244 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f\\x7f'})\n",
+ " 86%|██████████████████████████████████------| 6302/7340 [228:01<37:33, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:14:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6303/7340 [228:03<37:31, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:22,246 - agent.ComputerAgent - INFO - Computer: click({'x': 439, 'y': 136})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 439, 'y': 136})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:22,925 - agent.ComputerAgent - INFO - Computer: click({'x': 536, 'y': 428})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 536, 'y': 428})\n",
+ "\u001b[92m19:14:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:14:23,567 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 296, 'y': 133})\n",
+ " 86%|██████████████████████████████████------| 6303/7340 [228:05<37:31, 27.6 steps/min]2025-08-11 19:14:24,244 - agent.ComputerAgent - INFO - Computer: click({'x': 638, 'y': 559})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 638, 'y': 559})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6306/7340 [228:06<37:24, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:14:26,880 - agent.ComputerAgent - INFO - Computer: type({'text': 'Favorites'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Favorites'})\n",
+ "\u001b[92m19:14:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6307/7340 [228:08<37:22, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:27,532 - agent.ComputerAgent - INFO - Computer: click({'x': 339, 'y': 350})\n",
+ "\u001b[92m19:14:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:14:28,235 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 162})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6308/7340 [228:09<37:19, 27.6 steps/min]2025-08-11 19:14:28,906 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:14:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:14:29,557 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:14:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d7a2dbd6-fb11-46b4-a9d7-3dc4c72be953/close \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6318/7340 [228:12<36:54, 27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:14:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/18debd9e-6c58-4504-8a04-13cba683a254/reset \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6318/7340 [228:13<36:55, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:32,282 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 244})\n",
+ "\u001b[92m19:14:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:14:32,929 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:14:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:33,608 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:14:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:34,281 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "2025-08-11 19:14:34,918 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:14:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6318/7340 [228:16<36:55, 27.7 steps/min]2025-08-11 19:14:35,595 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:14:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:36,285 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:14:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:36,979 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:14:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:38,055 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:14:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:19<36:51, 27.7 steps/min]\n",
+ "2025-08-11 19:14:38,765 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:14:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:39,416 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:14:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:21<36:51, 27.7 steps/min]2025-08-11 19:14:40,098 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:14:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:22<36:51, 27.7 steps/min]2025-08-11 19:14:41,569 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:14:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:23<36:51, 27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:24<36:51, 27.7 steps/min]2025-08-11 19:14:43,207 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:14:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:14:44,505 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+j'})\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:26<36:52, 27.7 steps/min]2025-08-11 19:14:45,166 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:14:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:45,878 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:14:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:14:46,547 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:14:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:28<36:52, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:29<36:52, 27.7 steps/min]\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6320/7340 [228:31<36:52, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.66s/it]\u001b[92m19:14:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]2025-08-11 19:14:53,235 - agent.ComputerAgent - INFO - Agent: The email account anonym-x2024@outlook.com has been removed from Thunderbird, including all associated message data. Task completed\n",
+ "2025-08-11 19:14:54,083 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 482\n",
+ " - prompt_tokens: 10803\n",
+ " - total_tokens: 11285\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 448\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0183\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.59s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6321/7340 [228:37<36:51, 27.6 steps/min]\u001b[92m19:14:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:56,984 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://github.com/liangjs333'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:14:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 86%|██████████████████████████████████------| 6321/7340 [228:40<36:51, 27.6 steps/min]\u001b[92m19:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:14:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:14:59,552 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 33})\n",
+ "\u001b[92m19:14:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:00,239 - agent.ComputerAgent - INFO - Computer: click({'x': 642, 'y': 470})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:00,869 - agent.ComputerAgent - INFO - Computer: click({'x': 296, 'y': 61})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:15:02,821 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:15:02,822 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 324})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6322/7340 [228:45<36:50, 27.6 steps/min]\u001b[92m19:15:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:04,217 - agent.ComputerAgent - INFO - Computer: click({'x': 961, 'y': 760})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:04,848 - agent.ComputerAgent - INFO - Computer: click({'x': 690, 'y': 253})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:15:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:06,211 - agent.ComputerAgent - INFO - Computer: click({'x': 677, 'y': 59})\n",
+ "\u001b[92m19:15:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:07,558 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+h'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6326/7340 [228:50<36:40, 27.6 steps/min]\u001b[92m19:15:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:15:09,610 - agent.ComputerAgent - INFO - Computer: click({'x': 698, 'y': 705})\n",
+ "2025-08-11 19:15:10,230 - agent.ComputerAgent - INFO - Computer: click({'x': 983, 'y': 218})\n",
+ "\u001b[92m19:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:10,903 - agent.ComputerAgent - INFO - Computer: click({'x': 915, 'y': 353})\n",
+ "\u001b[92m19:15:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:15:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 86%|██████████████████████████████████------| 6329/7340 [228:52<36:33, 27.7 steps/min]\n",
+ "\u001b[92m19:15:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:11,604 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 118})\n",
+ "\u001b[92m19:15:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:15:12,249 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 429})\n",
+ "\u001b[92m19:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6332/7340 [228:53<36:26, 27.7 steps/min]2025-08-11 19:15:12,941 - agent.ComputerAgent - INFO - Computer: drag({'start_element_description': \"start of line 9 '# TODO: Replace the value at arr['\", 'end_element_description': \"end of line 10 '...with the value at arr[j]'\", 'x': 476, 'y': 107})\n",
+ "\u001b[92m19:15:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:15:13,572 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 996, 'y': 66}, {'x': 987, 'y': 463}]})\n",
+ " 86%|██████████████████████████████████------| 6336/7340 [228:56<36:16, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:15:15,250 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m19:15:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/09c933ad-61bf-4498-b248-0df86e3aea78/reset \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6336/7340 [228:57<36:16, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6342/7340 [228:58<36:01, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:17,797 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:15:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/3beb69a5-1b07-41f0-b3d9-0e3329eca1d2/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 86%|██████████████████████████████████------| 6342/7340 [228:59<36:02, 27.7 steps/min]2025-08-11 19:15:18,451 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:15:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6352/7340 [229:00<35:37, 27.7 steps/min]2025-08-11 19:15:19,758 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:15:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:15:20,451 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:15:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:21,139 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:15:21,817 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:15:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6352/7340 [229:03<35:37, 27.7 steps/min]2025-08-11 19:15:22,469 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:15:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:15:23,160 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:15:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:15:23,849 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:15:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6352/7340 [229:05<35:38, 27.7 steps/min]2025-08-11 19:15:24,529 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:15:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:15:25,208 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:15:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6352/7340 [229:06<35:38, 27.7 steps/min]2025-08-11 19:15:25,889 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:15:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/80d19a15-b1ca-43cc-8d1b-1f86242172b5/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:27,219 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:15:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6352/7340 [229:09<35:38, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 19:15:28,439 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:15:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6353/7340 [229:10<35:36, 27.7 steps/min]2025-08-11 19:15:29,131 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:15:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:30,486 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6353/7340 [229:12<35:36, 27.7 steps/min]\u001b[92m19:15:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6355/7340 [229:14<35:31, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7d6056be-d509-451e-bd61-5e62f1bcb990/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<35:31, 27.7 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6355/7340 [229:16<35:32, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.67s/it]2025-08-11 19:15:36,588 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x01'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x01'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6355/7340 [229:18<35:32, 27.7 steps/min]2025-08-11 19:15:37,212 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:15:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]27.7 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<35:30, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 87%|██████████████████████████████████------| 6356/7340 [229:22<35:30, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6356/7340 [229:23<35:30, 27.7 steps/min]\u001b[92m19:15:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:42,881 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:15:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.82s/it]27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6356/7340 [229:26<35:31, 27.7 steps/min]\u001b[92m19:15:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.72s/it]\u001b[92m19:15:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.33s/it]27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00, 1.50s/it]\n",
+ "\u001b[92m19:15:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 87%|██████████████████████████████████------| 6356/7340 [229:29<35:31, 27.7 steps/min]\u001b[92m19:15:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6356/7340 [229:30<35:31, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:50,395 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\n\\n'})\n",
+ " 87%|██████████████████████████████████------| 6356/7340 [229:32<35:32, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6357/7340 [229:33<35:29, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6357/7340 [229:35<35:30, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/reset \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6357/7340 [229:37<35:30, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:15:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:57,991 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f'})\n",
+ " 87%|██████████████████████████████████------| 6357/7340 [229:39<35:30, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:15:58,660 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:15:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:15:59,687 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:15:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6358/7340 [229:41<35:28, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6358/7340 [229:42<35:28, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:16:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:01,798 - agent.ComputerAgent - INFO - Computer: click({'x': 961, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 961, 'y': 760})\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:44<35:26, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:45<35:26, 27.7 steps/min]2025-08-11 19:16:04,970 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:16:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:47<35:27, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:48<35:27, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:16:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:08,339 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:16:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:50<35:27, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:58<35:28, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [229:59<35:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [230:00<35:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [230:01<35:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:16:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:21,297 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 473})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 652, 'y': 473})\n",
+ " 87%|██████████████████████████████████------| 6359/7340 [230:03<35:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/reset \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6360/7340 [230:04<35:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:16:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:16:24,285 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x1a'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x1a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6360/7340 [230:06<35:27, 27.6 steps/min]2025-08-11 19:16:24,927 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:16:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:08<35:25, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/77892268-14f2-4dfa-b58c-6a682f258679/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:16:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:27,731 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:16:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:09<35:25, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/19680590-12ce-4590-9ac7-a2966cf205f3/reset \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:10<35:25, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:16:29,911 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:16:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:11<35:25, 27.6 steps/min]2025-08-11 19:16:30,551 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:16:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:16:31,182 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:16:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:12<35:25, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:16<35:26, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:16:36,561 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:16:36,563 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+tab'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+tab'})\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:18<35:26, 27.6 steps/min]\u001b[92m19:16:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:37,235 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 62})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 62})\n",
+ "2025-08-11 19:16:37,922 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:16:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 87%|██████████████████████████████████------| 6361/7340 [230:20<35:27, 27.6 steps/min]\u001b[92m19:16:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:16:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:39,766 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:16:39,767 - agent.ComputerAgent - INFO - Computer: click({'x': 611, 'y': 129})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 611, 'y': 129})\n",
+ " 87%|██████████████████████████████████------| 6362/7340 [230:21<35:24, 27.6 steps/min]\u001b[92m19:16:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:40,425 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 243})\n",
+ " 87%|██████████████████████████████████------| 6364/7340 [230:23<35:20, 27.6 steps/min]\u001b[92m19:16:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:42,631 - agent.ComputerAgent - INFO - Computer: click({'x': 295, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 295, 'y': 105})\n",
+ " 87%|██████████████████████████████████------| 6365/7340 [230:25<35:17, 27.6 steps/min]\u001b[92m19:16:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:44,653 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:16:45,282 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:16:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:46,589 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:16:46,590 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+e'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win+e'})\n",
+ " 87%|██████████████████████████████████------| 6365/7340 [230:28<35:18, 27.6 steps/min]\u001b[92m19:16:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:16:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:47,966 - agent.ComputerAgent - INFO - Computer: click({'x': 904, 'y': 223})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 904, 'y': 223})\n",
+ "2025-08-11 19:16:48,622 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:16:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:16:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6366/7340 [230:30<35:16, 27.6 steps/min]\u001b[92m19:16:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:49,282 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:16:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:16:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:49,972 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:16:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:16:50,674 - agent.ComputerAgent - INFO - Computer: click({'x': 568, 'y': 247})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 568, 'y': 247})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:52,012 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ " 87%|██████████████████████████████████------| 6367/7340 [230:33<35:14, 27.6 steps/min]\u001b[92m19:16:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:53,050 - agent.ComputerAgent - INFO - Computer: click({'x': 105, 'y': 158})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 105, 'y': 158})\n",
+ "\u001b[92m19:16:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6369/7340 [230:34<35:09, 27.6 steps/min]\u001b[92m19:16:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:53,722 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:16:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6370/7340 [230:35<35:06, 27.6 steps/min]\u001b[92m19:16:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:54,948 - agent.ComputerAgent - INFO - Computer: click({'x': 399, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 399, 'y': 101})\n",
+ " 87%|██████████████████████████████████------| 6370/7340 [230:36<35:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:56,612 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:16:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6371/7340 [230:38<35:04, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:16:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:16:57,291 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:16:57,291 - agent.ComputerAgent - INFO - Computer: click({'x': 410, 'y': 246})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 410, 'y': 246})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:16:58,622 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "\u001b[92m19:16:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:16:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:00,613 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x7f'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x7f'})\n",
+ " 87%|██████████████████████████████████------| 6371/7340 [230:42<35:05, 27.6 steps/min]2025-08-11 19:17:01,300 - agent.ComputerAgent - INFO - Computer: click({'x': 327, 'y': 279})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 327, 'y': 279})\n",
+ "\u001b[92m19:17:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:17:01,922 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:17:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:02,592 - agent.ComputerAgent - INFO - Computer: click({'x': 961, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 961, 'y': 760})\n",
+ " 87%|██████████████████████████████████------| 6374/7340 [230:44<34:58, 27.6 steps/min]\u001b[92m19:17:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:03,226 - agent.ComputerAgent - INFO - Computer: click({'x': 890, 'y': 616})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 890, 'y': 616})\n",
+ "2025-08-11 19:17:03,867 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:17:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6376/7340 [230:45<34:53, 27.6 steps/min]\u001b[92m19:17:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:04,911 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:17:04,911 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 10})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 20, 'y': 10})\n",
+ "2025-08-11 19:17:05,567 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:17:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:06,885 - agent.ComputerAgent - INFO - Computer: click({'x': 960, 'y': 349, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 960, 'y': 349, 'button': 'left'})\n",
+ " 87%|██████████████████████████████████------| 6377/7340 [230:48<34:51, 27.6 steps/min]\u001b[92m19:17:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:07,572 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:17:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:17:08,274 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 704, 'y': 705}, {'x': 705, 'y': 702}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 704, 'y': 705}, {'x': 705, 'y': 702}]})\n",
+ " 87%|██████████████████████████████████------| 6379/7340 [230:50<34:46, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:09,709 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:17:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:17:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:17:11,066 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 87%|██████████████████████████████████------| 6380/7340 [230:52<34:44, 27.6 steps/min]2025-08-11 19:17:11,749 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 16, 'y': 335}, {'x': 986, 'y': 640}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 16, 'y': 335}, {'x': 986, 'y': 640}]})\n",
+ "\u001b[92m19:17:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:13,786 - agent.ComputerAgent - INFO - Computer: click({'x': 878, 'y': 261})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 878, 'y': 261})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6381/7340 [230:55<34:42, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:14,445 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:17:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:17:15,110 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:17:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:15,803 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:17:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:17:16,499 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:17:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6383/7340 [230:58<34:37, 27.6 steps/min]\u001b[92m19:17:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:17,819 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 238})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 100, 'y': 238})\n",
+ "\u001b[92m19:17:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:17:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6383/7340 [231:00<34:38, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:19,194 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:17:19,195 - agent.ComputerAgent - INFO - Computer: click({'x': 528, 'y': 456})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 528, 'y': 456})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:19,848 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:17:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:17:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6384/7340 [231:01<34:35, 27.6 steps/min]2025-08-11 19:17:20,525 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 628, 'scroll_x': 0, 'x': 512, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 628, 'scroll_x': 0, 'x': 512, 'y': 244})\n",
+ "2025-08-11 19:17:21,200 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:17:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6385/7340 [231:03<34:33, 27.6 steps/min]\u001b[92m19:17:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:22,569 - agent.ComputerAgent - INFO - Computer: click({'x': 930, 'y': 346})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 930, 'y': 346})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:17:23,280 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:17:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6386/7340 [231:05<34:31, 27.6 steps/min]2025-08-11 19:17:23,971 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:17:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:24,637 - agent.ComputerAgent - INFO - Computer: click({'x': 329, 'y': 165})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 329, 'y': 165})\n",
+ " 87%|██████████████████████████████████------| 6387/7340 [231:06<34:28, 27.6 steps/min]2025-08-11 19:17:25,325 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:17:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:17:26,022 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:17:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:07<34:26, 27.6 steps/min]2025-08-11 19:17:26,672 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:17:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/34262f07-e5d2-47b9-913e-3f44032d779c/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:09<34:26, 27.6 steps/min]2025-08-11 19:17:28,032 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:17:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:17:28,694 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:17:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:10<34:27, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:17:29,882 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:17:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:11<34:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:17:31,245 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:17:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:12<34:27, 27.6 steps/min]2025-08-11 19:17:31,941 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:17:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:14<34:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.63s/it]\u001b[92m19:17:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:16<34:27, 27.6 steps/min]\u001b[92m19:17:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:35,330 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:17:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:18<34:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "\u001b[92m19:17:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:38,989 - agent.ComputerAgent - INFO - Computer: type({'text': 'happy-extension v0.0.1'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'happy-extension v0.0.1'})\n",
+ " 87%|██████████████████████████████████------| 6388/7340 [231:20<34:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:17:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:42,275 - agent.ComputerAgent - INFO - Computer: click({'x': 503, 'y': 296})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 503, 'y': 296})\n",
+ " 87%|██████████████████████████████████------| 6389/7340 [231:24<34:26, 27.6 steps/min]\u001b[92m19:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:17:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:43,595 - agent.ComputerAgent - INFO - Computer: click({'x': 737, 'y': 402})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 737, 'y': 402})\n",
+ "2025-08-11 19:17:44,258 - agent.ComputerAgent - INFO - Computer: click({'x': 965, 'y': 514})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 965, 'y': 514})\n",
+ "2025-08-11 19:17:44,906 - agent.ComputerAgent - INFO - Computer: click({'x': 563, 'y': 224})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 563, 'y': 224})\n",
+ "2025-08-11 19:17:45,578 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 312})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 671, 'y': 312})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:47,612 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://github.com/liangjs333'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'https://github.com/liangjs333'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6390/7340 [231:30<34:25, 27.6 steps/min]\u001b[92m19:17:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:49,631 - agent.ComputerAgent - INFO - Computer: click({'x': 683, 'y': 480})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 683, 'y': 480})\n",
+ "2025-08-11 19:17:50,249 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 243})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 926, 'y': 243})\n",
+ "\u001b[92m19:17:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:51,598 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:53,606 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 429})\n",
+ "2025-08-11 19:17:54,287 - agent.ComputerAgent - INFO - Computer: click({'x': 620, 'y': 172})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 620, 'y': 172})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:17:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:17:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6395/7340 [231:37<34:13, 27.6 steps/min]\u001b[92m19:17:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:56,244 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:17:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:17:56,895 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 59, 'y': 165}, {'x': 211, 'y': 460}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 59, 'y': 165}, {'x': 211, 'y': 460}]})\n",
+ "2025-08-11 19:17:57,570 - agent.ComputerAgent - INFO - Computer: click({'x': 690, 'y': 297})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 690, 'y': 297})\n",
+ "2025-08-11 19:17:58,220 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 925, 'y': 244})\n",
+ "\u001b[92m19:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:17:58,880 - agent.ComputerAgent - INFO - Computer: double_click({'x': 987, 'y': 396})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 987, 'y': 396})\n",
+ " 87%|██████████████████████████████████------| 6399/7340 [231:40<34:04, 27.6 steps/min]\u001b[92m19:17:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:17:59,518 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 148})\n",
+ "\u001b[92m19:17:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:00,203 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 483})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 483})\n",
+ " 87%|██████████████████████████████████------| 6403/7340 [231:41<33:54, 27.6 steps/min]\u001b[92m19:18:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:00,898 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 578, 'y': 335}, {'x': 700, 'y': 684}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 578, 'y': 335}, {'x': 700, 'y': 684}]})\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:44<33:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:18:04,124 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:18:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:45<33:47, 27.6 steps/min]2025-08-11 19:18:04,773 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:18:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:18:05,413 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:18:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:47<33:47, 27.6 steps/min]2025-08-11 19:18:06,094 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:18:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:18:06,737 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:18:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:48<33:47, 27.6 steps/min]2025-08-11 19:18:07,422 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:18:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:08,094 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:18:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:49<33:48, 27.6 steps/min]2025-08-11 19:18:08,783 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:18:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:09,474 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:18:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:10,153 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:18:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:10,810 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:18:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:11,495 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:18:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:53<33:48, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:12,835 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:18:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:14,133 - agent.ComputerAgent - INFO - Computer: type({'text': 'New York City'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'New York City'})\n",
+ "2025-08-11 19:18:14,759 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:18:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:18:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6406/7340 [231:57<33:49, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:16,843 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 435})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 644, 'y': 435})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:18,199 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+t'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+t'})\n",
+ "2025-08-11 19:18:18,866 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:18:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:19,491 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:18:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6407/7340 [232:01<33:47, 27.6 steps/min]\u001b[92m19:18:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:20,171 - agent.ComputerAgent - INFO - Computer: click({'x': 971, 'y': 101})\n",
+ "2025-08-11 19:18:20,882 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:18:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6408/7340 [232:03<33:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:22,889 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+f'})\n",
+ "\u001b[92m19:18:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6409/7340 [232:04<33:42, 27.6 steps/min]2025-08-11 19:18:23,580 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 642})\n",
+ "2025-08-11 19:18:24,274 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:18:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:18:25,678 - agent.ComputerAgent - INFO - Computer: click({'x': 957, 'y': 330, 'button': 'left'})\n",
+ " 87%|██████████████████████████████████------| 6409/7340 [232:07<33:43, 27.6 steps/min]2025-08-11 19:18:27,115 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:18:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 87%|██████████████████████████████████------| 6411/7340 [232:08<33:38, 27.6 steps/min]2025-08-11 19:18:28,845 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:18:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6411/7340 [232:11<33:38, 27.6 steps/min]\u001b[92m19:18:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:31,131 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6411/7340 [232:12<33:38, 27.6 steps/min]2025-08-11 19:18:31,804 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:18:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:33,862 - agent.ComputerAgent - INFO - Computer: type({'text': 'natural products'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6412/7340 [232:15<33:36, 27.6 steps/min]2025-08-11 19:18:34,516 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:18:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:18:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:35,207 - agent.ComputerAgent - INFO - Computer: click({'x': 65, 'y': 283})\n",
+ "2025-08-11 19:18:35,844 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:18:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 87%|██████████████████████████████████------| 6413/7340 [232:18<33:34, 27.6 steps/min]\u001b[92m19:18:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:18:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6414/7340 [232:19<33:32, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:38,594 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:18:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:18:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:39,280 - agent.ComputerAgent - INFO - Computer: click({'x': 188, 'y': 257})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:40,655 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6414/7340 [232:23<33:32, 27.6 steps/min]\n",
+ "2025-08-11 19:18:41,970 - agent.ComputerAgent - INFO - Computer: click({'x': 181, 'y': 33})\n",
+ "\u001b[92m19:18:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:18:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:43,925 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:18:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:44,603 - agent.ComputerAgent - INFO - Computer: click({'x': 669, 'y': 297})\n",
+ "\u001b[92m19:18:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6416/7340 [232:26<33:28, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:18:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:45,951 - agent.ComputerAgent - INFO - Computer: click({'x': 648, 'y': 436})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:47,994 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "\u001b[92m19:18:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6418/7340 [232:29<33:23, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:48,641 - agent.ComputerAgent - INFO - Computer: click({'x': 919, 'y': 345})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:18:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:50,007 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 90})\n",
+ "\u001b[92m19:18:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 87%|██████████████████████████████████------| 6420/7340 [232:31<33:19, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:18:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:51,365 - agent.ComputerAgent - INFO - Computer: click({'x': 110, 'y': 158})\n",
+ "2025-08-11 19:18:52,016 - agent.ComputerAgent - INFO - Computer: click({'x': 407, 'y': 397})\n",
+ "\u001b[92m19:18:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 87%|██████████████████████████████████------| 6422/7340 [232:34<33:14, 27.6 steps/min]\u001b[92m19:18:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:53,379 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:18:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:18:54,061 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 243})\n",
+ "\u001b[92m19:18:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6424/7340 [232:36<33:10, 27.6 steps/min]\u001b[92m19:18:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:56,103 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 120, 'scroll_y': 0, 'x': 459, 'y': 736})\n",
+ "\u001b[92m19:18:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:18:57,399 - agent.ComputerAgent - INFO - Computer: type({'text': 'Outlook SMTP'})\n",
+ " 88%|███████████████████████████████████-----| 6425/7340 [232:39<33:07, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:18:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:18:58,055 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:18:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:18:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:18:58,749 - agent.ComputerAgent - INFO - Computer: click({'x': 339, 'y': 309})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ " 88%|███████████████████████████████████-----| 6427/7340 [232:40<33:03, 27.6 steps/min]2025-08-11 19:18:59,385 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:18:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:00,056 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 987, 'y': 425}, {'x': 984, 'y': 577}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6429/7340 [232:42<32:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:02,186 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:19:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6429/7340 [232:43<32:58, 27.6 steps/min]2025-08-11 19:19:02,841 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:19:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:03,535 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:19:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:04,184 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:19:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6429/7340 [232:45<32:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:04,845 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:19:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:05,535 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:19:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:06,223 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:19:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6429/7340 [232:48<32:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:07,575 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:19:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:08,289 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:19:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6429/7340 [232:50<32:59, 27.6 steps/min]\u001b[92m19:19:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:10,315 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:19:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:10,986 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:19:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:11,663 - agent.ComputerAgent - INFO - Computer: click({'x': 884, 'y': 616})\n",
+ "2025-08-11 19:19:12,325 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 88%|███████████████████████████████████-----| 6429/7340 [232:54<33:00, 27.6 steps/min]\u001b[92m19:19:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:13,714 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n\\n'})\n",
+ "2025-08-11 19:19:14,399 - agent.ComputerAgent - INFO - Computer: click({'x': 209, 'y': 551})\n",
+ "2025-08-11 19:19:15,454 - agent.ComputerAgent - INFO - Computer: click({'x': 313, 'y': 132})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6430/7340 [232:57<32:58, 27.6 steps/min]\u001b[92m19:19:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:19:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:19:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:18,205 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:19:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:18,897 - agent.ComputerAgent - INFO - Computer: click({'x': 651, 'y': 439})\n",
+ " 88%|███████████████████████████████████-----| 6433/7340 [233:00<32:51, 27.6 steps/min]\u001b[92m19:19:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:19:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:20,603 - agent.ComputerAgent - INFO - Computer: move({'x': 221, 'y': 254})\n",
+ "\u001b[92m19:19:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6434/7340 [233:02<32:48, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:21,275 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 94})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6435/7340 [233:04<32:46, 27.6 steps/min]2025-08-11 19:19:23,285 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:19:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:23,961 - agent.ComputerAgent - INFO - Computer: click({'x': 190, 'y': 33})\n",
+ "\u001b[92m19:19:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 88%|███████████████████████████████████-----| 6436/7340 [233:06<32:44, 27.6 steps/min]2025-08-11 19:19:25,287 - agent.ComputerAgent - INFO - Computer: click({'x': 278, 'y': 689})\n",
+ "2025-08-11 19:19:25,949 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 390})\n",
+ " 88%|███████████████████████████████████-----| 6437/7340 [233:07<32:42, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:26,635 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:19:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:27,299 - agent.ComputerAgent - INFO - Computer: click({'x': 72, 'y': 202})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6439/7340 [233:09<32:37, 27.6 steps/min]\u001b[92m19:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:28,655 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:19:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6440/7340 [233:11<32:35, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:30,391 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "\u001b[92m19:19:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:31,437 - agent.ComputerAgent - INFO - Computer: click({'x': 332, 'y': 277})\n",
+ " 88%|███████████████████████████████████-----| 6440/7340 [233:13<32:35, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:32,096 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:19:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:32,777 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:19:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6442/7340 [233:15<32:30, 27.6 steps/min]2025-08-11 19:19:34,166 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:19:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:34,841 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:19:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6442/7340 [233:16<32:31, 27.6 steps/min]2025-08-11 19:19:35,535 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:19:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:19:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:36,230 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 747})\n",
+ "2025-08-11 19:19:36,891 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:19:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:38,192 - agent.ComputerAgent - INFO - Computer: click({'x': 933, 'y': 330, 'button': 'left'})\n",
+ " 88%|███████████████████████████████████-----| 6442/7340 [233:19<32:31, 27.6 steps/min]2025-08-11 19:19:38,885 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:19:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6444/7340 [233:20<32:26, 27.6 steps/min]2025-08-11 19:19:40,063 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:19:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:19:40,756 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:19:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6444/7340 [233:22<32:26, 27.6 steps/min]2025-08-11 19:19:41,807 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:19:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6444/7340 [233:24<32:27, 27.6 steps/min]\u001b[92m19:19:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:43,662 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 162})\n",
+ " 88%|███████████████████████████████████-----| 6444/7340 [233:25<32:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:44,826 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:19:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9f99ab11-d23e-4652-b198-c88ed8fc84f6/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 88%|███████████████████████████████████-----| 6445/7340 [233:27<32:25, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:19:46,573 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:19:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:19:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:19:48,538 - agent.ComputerAgent - INFO - Computer: type({'text': '0.0.1'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6445/7340 [233:30<32:25, 27.6 steps/min]\u001b[92m19:19:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:49,791 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:19:49,791 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 43})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:19:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]2025-08-11 19:19:51,876 - agent.ComputerAgent - INFO - Computer: type({'text': '20 cm'})\n",
+ " 88%|███████████████████████████████████-----| 6446/7340 [233:33<32:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.63s/it]2025-08-11 19:19:54,001 - agent.ComputerAgent - INFO - Agent: Created a new folder named \"Favorites\" on the bookmarks bar in Chrome. Task completed.\n",
+ "2025-08-11 19:19:54,866 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 347\n",
+ " - prompt_tokens: 14886\n",
+ " - total_tokens: 15233\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0221\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 88%|███████████████████████████████████-----| 6449/7340 [233:38<32:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:19:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:19:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:19:58,296 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:19:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:19:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6452/7340 [233:41<32:09, 27.6 steps/min]\u001b[92m19:19:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:19:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:00,327 - agent.ComputerAgent - INFO - Computer: click({'x': 631, 'y': 327})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:01,343 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 405})\n",
+ "\u001b[92m19:20:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:01,993 - agent.ComputerAgent - INFO - Computer: move({'x': 314, 'y': 254})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:02,668 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:20:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:03,330 - agent.ComputerAgent - INFO - Computer: click({'x': 969, 'y': 760})\n",
+ "\u001b[92m19:20:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6452/7340 [233:45<32:10, 27.6 steps/min]\u001b[92m19:20:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:20:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:05,380 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 19:20:06,063 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 275})\n",
+ "2025-08-11 19:20:06,751 - agent.ComputerAgent - INFO - Computer: click({'x': 291, 'y': 153})\n",
+ "\u001b[92m19:20:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:07,445 - agent.ComputerAgent - INFO - Computer: click({'x': 919, 'y': 346})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:08,144 - agent.ComputerAgent - INFO - Computer: click({'x': 193, 'y': 658})\n",
+ " 88%|███████████████████████████████████-----| 6456/7340 [233:49<32:01, 27.6 steps/min]\u001b[92m19:20:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:20:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:08,813 - agent.ComputerAgent - INFO - Computer: click({'x': 78, 'y': 122})\n",
+ "2025-08-11 19:20:09,477 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:20:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:20:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6461/7340 [233:51<31:49, 27.6 steps/min]\u001b[92m19:20:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:10,888 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 987, 'y': 378}, {'x': 987, 'y': 548}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/96b656db-b210-453a-9230-f958f621d7b6/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6462/7340 [233:53<31:46, 27.6 steps/min]\u001b[92m19:20:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:12,220 - agent.ComputerAgent - INFO - Computer: click({'x': 648, 'y': 439})\n",
+ " 88%|███████████████████████████████████-----| 6463/7340 [233:54<31:44, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:13,877 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:20:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6464/7340 [233:55<31:42, 27.6 steps/min]2025-08-11 19:20:14,559 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:20:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:15,222 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:20:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:15,887 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:20:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6468/7340 [233:57<31:32, 27.6 steps/min]2025-08-11 19:20:16,566 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:20:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:17,265 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:20:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:17,898 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:20:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6468/7340 [234:00<31:32, 27.6 steps/min]\u001b[92m19:20:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:19,244 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:20:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:19,967 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:20:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<31:33, 27.6 steps/min]2025-08-11 19:20:20,626 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:20:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2960baac-68ef-44af-8d6c-fe4b45263791/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]27.6 steps/min]2025-08-11 19:20:22,086 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:20:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:22,777 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:20:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6468/7340 [234:04<31:33, 27.6 steps/min]2025-08-11 19:20:23,696 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.78s/it]\n",
+ "\u001b[92m19:20:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:24,377 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:20:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.73s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.44s/it]\n",
+ "\u001b[92m19:20:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:28,656 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6468/7340 [234:11<31:34, 27.6 steps/min]\u001b[92m19:20:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:20:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:30,635 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 244})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.72s/it]2025-08-11 19:20:32,075 - agent.ComputerAgent - INFO - Computer: type({'text': 'paper01.pdf'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "2025-08-11 19:20:35,791 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6471/7340 [234:18<31:27, 27.6 steps/min]\u001b[92m19:20:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:37,722 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'shift+b'})\n",
+ "2025-08-11 19:20:38,356 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:20:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:20:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:20:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6471/7340 [234:22<31:28, 27.6 steps/min]2025-08-11 19:20:41,041 - agent.ComputerAgent - INFO - Computer: double_click({'x': 166, 'y': 378})\n",
+ "2025-08-11 19:20:41,702 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 50})\n",
+ "2025-08-11 19:20:42,370 - agent.ComputerAgent - INFO - Computer: click({'x': 520, 'y': 437})\n",
+ "\u001b[92m19:20:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:20:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 88%|███████████████████████████████████-----| 6471/7340 [234:24<31:28, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:43,696 - agent.ComputerAgent - INFO - Computer: click({'x': 925, 'y': 243})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:20:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:45,015 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:20:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:45,667 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:20:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:46,357 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 617})\n",
+ "\u001b[92m19:20:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 88%|███████████████████████████████████-----| 6474/7340 [234:28<31:21, 27.6 steps/min]\u001b[92m19:20:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:20:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:20:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:47,739 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 256})\n",
+ "\u001b[92m19:20:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:20:48,442 - agent.ComputerAgent - INFO - Computer: click({'x': 449, 'y': 351})\n",
+ "2025-08-11 19:20:49,116 - agent.ComputerAgent - INFO - Computer: click({'x': 270, 'y': 298})\n",
+ "2025-08-11 19:20:49,771 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 515, 'scroll_x': 0, 'x': 212, 'y': 54})\n",
+ "\u001b[92m19:20:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:50,448 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:20:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6476/7340 [234:32<31:17, 27.6 steps/min]\n",
+ "2025-08-11 19:20:51,143 - agent.ComputerAgent - INFO - Computer: click({'x': 528, 'y': 34})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:20:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 88%|███████████████████████████████████-----| 6480/7340 [234:33<31:07, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:20:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:20:53,024 - agent.ComputerAgent - INFO - Computer: double_click({'x': 926, 'y': 561})\n",
+ " 88%|███████████████████████████████████-----| 6481/7340 [234:34<31:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:54,805 - agent.ComputerAgent - INFO - Computer: type({'text': 'smtp.office365.com'})\n",
+ " 88%|███████████████████████████████████-----| 6483/7340 [234:37<31:00, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:56,978 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:20:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6483/7340 [234:38<31:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:57,670 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:20:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:20:58,358 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:20:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6483/7340 [234:40<31:01, 27.6 steps/min]2025-08-11 19:20:59,046 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:20:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:20:59,726 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:20:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6483/7340 [234:41<31:01, 27.6 steps/min]2025-08-11 19:21:00,378 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:21:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:21:01,076 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:21:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6483/7340 [234:43<31:01, 27.6 steps/min]2025-08-11 19:21:01,782 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:21:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:21:02,457 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:21:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 88%|███████████████████████████████████-----| 6483/7340 [234:44<31:01, 27.6 steps/min]2025-08-11 19:21:03,139 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:21:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:21:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:21:05,093 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thunderbird'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Thunderbird'})\n",
+ " 88%|███████████████████████████████████-----| 6484/7340 [234:46<30:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:21:06,434 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+s'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+s'})\n",
+ " 88%|███████████████████████████████████-----| 6485/7340 [234:49<30:57, 27.6 steps/min]2025-08-11 19:21:08,468 - agent.ComputerAgent - INFO - Computer: click({'x': 690, 'y': 358})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 690, 'y': 358})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b641dbb7-3e3c-437d-bc11-5e038171855d/close \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6488/7340 [234:55<30:50, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 88%|███████████████████████████████████-----| 6488/7340 [234:56<30:51, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:21:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:21:31,223 - agent.ComputerAgent - INFO - Computer: double_click({'x': 540, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 540, 'y': 131})\n",
+ "2025-08-11 19:21:44,640 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:21:53,241 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 987, 'y': 209}, {'x': 984, 'y': 462}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 987, 'y': 209}, {'x': 984, 'y': 462}]})\n",
+ " 89%|███████████████████████████████████-----| 6503/7340 [235:34<30:19, 27.6 steps/min]\u001b[92m19:21:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:21:53,919 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 901, 'y': 616})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 901, 'y': 616})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:21:54,550 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:21:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:21:55,889 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'super'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'super'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6504/7340 [235:37<30:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:21:56,551 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:21:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:21:57,619 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:21:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6506/7340 [235:39<30:12, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:21:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6506/7340 [235:41<30:12, 27.6 steps/min]\u001b[92m19:22:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:22:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:01,150 - agent.ComputerAgent - INFO - Computer: click({'x': 499, 'y': 433})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 499, 'y': 433})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6506/7340 [235:43<30:13, 27.6 steps/min]\u001b[92m19:22:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:02,540 - agent.ComputerAgent - INFO - Computer: click({'x': 687, 'y': 59})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 687, 'y': 59})\n",
+ "\u001b[92m19:22:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:03,185 - agent.ComputerAgent - INFO - Computer: click({'x': 133, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 133, 'y': 89})\n",
+ " 89%|███████████████████████████████████-----| 6507/7340 [235:44<30:10, 27.6 steps/min]2025-08-11 19:22:03,840 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:22:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6509/7340 [235:48<30:06, 27.6 steps/min]\u001b[92m19:22:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:07,260 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:22:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:07,960 - agent.ComputerAgent - INFO - Computer: click({'x': 274, 'y': 298})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 274, 'y': 298})\n",
+ "\u001b[92m19:22:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6509/7340 [235:49<30:06, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:08,641 - agent.ComputerAgent - INFO - Computer: click({'x': 694, 'y': 387})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 694, 'y': 387})\n",
+ "2025-08-11 19:22:09,338 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 429})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:10,719 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 89%|███████████████████████████████████-----| 6510/7340 [235:52<30:04, 27.6 steps/min]2025-08-11 19:22:11,396 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6513/7340 [235:53<29:57, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:12,729 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:22:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:22:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:13,372 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:22:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:22:14,040 - agent.ComputerAgent - INFO - Computer: click({'x': 472, 'y': 205})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 472, 'y': 205})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:22:15,398 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 334, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 334, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6514/7340 [235:57<29:55, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:17,425 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:19,429 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:22:19,430 - agent.ComputerAgent - INFO - Agent: I’m ready to enhance it, but I don’t have the photo file to work on. Please upload the image (or provide a downloadable link) and tell me the maximum allowed file size so I can upscale it and recompress it to stay at or below that size.\n",
+ "\n",
+ "Once you provide the file, I will:\n",
+ "- Upscale using an AI model to increase resolution and detail.\n",
+ "- Denoise/sharpen as needed to avoid artifacts.\n",
+ "- Recompress (JPEG/WebP/PNG-optimized) to keep the final file size no larger than the original.\n",
+ "\n",
+ "Awaiting the image to proceed.\n",
+ "INFO:agent.ComputerAgent:Agent: I’m ready to enhance it, but I don’t have the photo file to work on. Please upload the image (or provide a downloadable link) and tell me the maximum allowed file size so I can upscale it and recompress it to stay at or below that size.\n",
+ "\n",
+ "Once you provide the file, I will:\n",
+ "- Upscale using an AI model to increase resolution and detail.\n",
+ "- Denoise/sharpen as needed to avoid artifacts.\n",
+ "- Recompress (JPEG/WebP/PNG-optimized) to keep the final file size no larger than the original.\n",
+ "\n",
+ "Awaiting the image to proceed.\n",
+ "2025-08-11 19:22:20,081 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1538\n",
+ " - prompt_tokens: 1934\n",
+ " - total_tokens: 3472\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1408\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0178\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1538\n",
+ " - prompt_tokens: 1934\n",
+ " - total_tokens: 3472\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1408\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0178\n",
+ " 89%|███████████████████████████████████-----| 6517/7340 [236:01<29:48, 27.6 steps/min]2025-08-11 19:22:20,740 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:22:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:22:21,375 - agent.ComputerAgent - INFO - Computer: click({'x': 809, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 809, 'y': 143})\n",
+ "\u001b[92m19:22:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:22,027 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 386})\n",
+ " 89%|███████████████████████████████████-----| 6517/7340 [236:03<29:48, 27.6 steps/min]2025-08-11 19:22:22,710 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:22:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:22:23,792 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:22:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ec4c0693-5de3-422a-9ae2-70d6a6759b11/close \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6519/7340 [236:06<29:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:25,831 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:22:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:22:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6519/7340 [236:07<29:44, 27.6 steps/min]2025-08-11 19:22:26,484 - agent.ComputerAgent - INFO - Computer: click({'x': 912, 'y': 616})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 912, 'y': 616})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:22:27,141 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:22:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6519/7340 [236:08<29:44, 27.6 steps/min]2025-08-11 19:22:27,819 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:22:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:22:28,469 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:22:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6520/7340 [236:10<29:42, 27.6 steps/min]2025-08-11 19:22:29,820 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:22:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:22:30,471 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:22:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:22:31,150 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:22:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6520/7340 [236:13<29:42, 27.6 steps/min]2025-08-11 19:22:32,003 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.86s/it]\u001b[92m19:22:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:22:32,679 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:22:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.74s/it]27.6 steps/min]2025-08-11 19:22:33,745 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:22:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6520/7340 [236:15<29:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.44s/it]\n",
+ "2025-08-11 19:22:36,017 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'space'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'space'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:37,854 - agent.ComputerAgent - INFO - Computer: type({'text': '\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08Outlook SMTP'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08\\x08Outlook SMTP'})\n",
+ " 89%|███████████████████████████████████-----| 6520/7340 [236:19<29:43, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/reset \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:22:38,516 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:22:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:22:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:39,198 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 433})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:22:40,539 - agent.ComputerAgent - INFO - Computer: type({'text': '\\n\\n'})\n",
+ " 89%|███████████████████████████████████-----| 6522/7340 [236:22<29:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:42,543 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+v'})\n",
+ "2025-08-11 19:22:43,206 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:22:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6524/7340 [236:25<29:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:44,572 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:22:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:45,949 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+left'})\n",
+ "\u001b[92m19:22:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:46,617 - agent.ComputerAgent - INFO - Computer: click({'x': 368, 'y': 241})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6524/7340 [236:29<29:34, 27.6 steps/min]\u001b[92m19:22:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:48,597 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:22:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:22:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:22:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:50,603 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 296}, {'x': 819, 'y': 490}]})\n",
+ "\u001b[92m19:22:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6525/7340 [236:32<29:32, 27.6 steps/min]\u001b[92m19:22:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:51,257 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 345})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:51,927 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 51})\n",
+ "\u001b[92m19:22:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:52,605 - agent.ComputerAgent - INFO - Computer: click({'x': 116, 'y': 363})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:22:53,977 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 89%|███████████████████████████████████-----| 6526/7340 [236:35<29:30, 27.6 steps/min]2025-08-11 19:22:54,635 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 244})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:22:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:56,608 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ " 89%|███████████████████████████████████-----| 6529/7340 [236:38<29:23, 27.6 steps/min]\u001b[92m19:22:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:57,297 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:22:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:22:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:22:58,625 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:22:58,626 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 725})\n",
+ "\u001b[92m19:22:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:22:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6530/7340 [236:41<29:21, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:22:59,994 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 426})\n",
+ "\u001b[92m19:23:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:00,681 - agent.ComputerAgent - INFO - Computer: click({'x': 651, 'y': 452})\n",
+ "\u001b[92m19:23:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6531/7340 [236:42<29:19, 27.6 steps/min]\n",
+ "2025-08-11 19:23:01,349 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 571, 'scroll_x': 0, 'x': 75, 'y': 310})\n",
+ "2025-08-11 19:23:02,006 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:23:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6533/7340 [236:43<29:14, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:03,597 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:23:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6534/7340 [236:46<29:12, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:23:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:23:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6534/7340 [236:47<29:12, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:06,273 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "\u001b[92m19:23:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:06,946 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:23:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:23:07,611 - agent.ComputerAgent - INFO - Computer: click({'x': 300, 'y': 79})\n",
+ "\u001b[92m19:23:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6534/7340 [236:49<29:12, 27.6 steps/min]2025-08-11 19:23:08,245 - agent.ComputerAgent - INFO - Computer: double_click({'button': 'left', 'x': 347, 'y': 193})\n",
+ "2025-08-11 19:23:08,916 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:23:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6537/7340 [236:50<29:05, 27.6 steps/min]2025-08-11 19:23:09,617 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:23:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:10,279 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:23:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6537/7340 [236:52<29:05, 27.6 steps/min]2025-08-11 19:23:10,957 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:23:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:23:12,313 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:23:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:23:12,959 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "\u001b[92m19:23:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6537/7340 [236:54<29:06, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:13,974 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:23:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:23:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:23:14,671 - agent.ComputerAgent - INFO - Computer: click({'x': 501, 'y': 432})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6537/7340 [236:56<29:06, 27.6 steps/min]2025-08-11 19:23:15,366 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:23:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:16,398 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:23:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6538/7340 [236:58<29:04, 27.6 steps/min]2025-08-11 19:23:17,433 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:23:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/4d0f1943-7dac-45a8-a354-73c43955694a/close \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6538/7340 [236:59<29:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 19:23:18,751 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:23:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6539/7340 [237:00<29:01, 27.6 steps/min]2025-08-11 19:23:19,420 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:23:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6539/7340 [237:01<29:02, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:21,131 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "\u001b[92m19:23:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6539/7340 [237:06<29:02, 27.6 steps/min]\u001b[92m19:23:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:23:25,130 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:23:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6539/7340 [237:08<29:02, 27.6 steps/min]\u001b[92m19:23:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:28,490 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 843, 'y': 479}, {'x': 997, 'y': 495}]})\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6539/7340 [237:10<29:03, 27.6 steps/min]\u001b[92m19:23:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.66s/it]2025-08-11 19:23:30,481 - agent.ComputerAgent - INFO - Computer: click({'x': 955, 'y': 320, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.39s/it]\n",
+ " 89%|███████████████████████████████████-----| 6541/7340 [237:12<28:58, 27.6 steps/min]\u001b[92m19:23:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 89%|███████████████████████████████████-----| 6542/7340 [237:13<28:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:32,871 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "\u001b[92m19:23:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:23:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:33,573 - agent.ComputerAgent - INFO - Computer: double_click({'x': 459, 'y': 207})\n",
+ " 89%|███████████████████████████████████-----| 6542/7340 [237:15<28:56, 27.6 steps/min]\u001b[92m19:23:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:34,611 - agent.ComputerAgent - INFO - Computer: click({'x': 613, 'y': 498})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:23:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6543/7340 [237:17<28:54, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:35,940 - agent.ComputerAgent - INFO - Computer: click({'x': 101, 'y': 390})\n",
+ "\u001b[92m19:23:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:23:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:36,574 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 643})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:37,261 - agent.ComputerAgent - INFO - Computer: click({'x': 70, 'y': 148})\n",
+ " 89%|███████████████████████████████████-----| 6544/7340 [237:18<28:52, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:37,934 - agent.ComputerAgent - INFO - Computer: click({'x': 524, 'y': 123})\n",
+ "\u001b[92m19:23:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:23:39,253 - agent.ComputerAgent - INFO - Computer: type({'text': 'https://www.drugs.com/npp/'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 19:23:39,900 - agent.ComputerAgent - INFO - Computer: click({'x': 398, 'y': 472})\n",
+ "2025-08-11 19:23:40,559 - agent.ComputerAgent - INFO - Computer: click({'x': 261, 'y': 143})\n",
+ "2025-08-11 19:23:41,249 - agent.ComputerAgent - INFO - Computer: click({'x': 841, 'y': 159})\n",
+ " 89%|███████████████████████████████████-----| 6548/7340 [237:22<28:42, 27.6 steps/min]2025-08-11 19:23:41,891 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:23:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:43,948 - agent.ComputerAgent - INFO - Agent: The image has been mirrored horizontally in GIMP and the changes were saved to berry.png without closing the application.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 19:23:44,603 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 353\n",
+ " - prompt_tokens: 9433\n",
+ " - total_tokens: 9786\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 320\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0153\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:45,957 - agent.ComputerAgent - INFO - Computer: click({'x': 706, 'y': 676, 'button': 'left'})\n",
+ " 89%|███████████████████████████████████-----| 6554/7340 [237:27<28:28, 27.6 steps/min]\u001b[92m19:23:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:23:47,296 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 563, 'scroll_x': 0, 'x': 442, 'y': 739})\n",
+ " 89%|███████████████████████████████████-----| 6555/7340 [237:29<28:26, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:23:47,951 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:23:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:23:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:23:48,980 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 433})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6556/7340 [237:30<28:24, 27.6 steps/min]2025-08-11 19:23:49,651 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "\u001b[92m19:23:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6557/7340 [237:32<28:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6557/7340 [237:33<28:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:52,861 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:23:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:53,561 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:23:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6557/7340 [237:35<28:22, 27.6 steps/min]2025-08-11 19:23:54,231 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:23:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:23:54,891 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:23:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6557/7340 [237:36<28:22, 27.6 steps/min]2025-08-11 19:23:55,569 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:23:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:23:56,260 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:23:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 89%|███████████████████████████████████-----| 6558/7340 [237:38<28:20, 27.6 steps/min]2025-08-11 19:23:56,915 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:23:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:23:58,358 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:23:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 89%|███████████████████████████████████-----| 6559/7340 [237:40<28:18, 27.6 steps/min]2025-08-11 19:23:59,041 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:23:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:23:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6559/7340 [237:41<28:18, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:24:00,422 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:24:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:24:01,072 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:24:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:24:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6559/7340 [237:42<28:18, 27.6 steps/min]2025-08-11 19:24:01,753 - agent.ComputerAgent - INFO - Computer: click({'x': 926, 'y': 243})\n",
+ "2025-08-11 19:24:02,462 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:24:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7bac89fe-36ca-4a8f-9dde-15747b2785bf/close \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6559/7340 [237:44<28:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:45<27:54, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/19680590-12ce-4590-9ac7-a2966cf205f3/close \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:47<27:54, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:48<27:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:49<27:54, 27.6 steps/min]2025-08-11 19:24:08,420 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:24:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:08, 2.88s/it]\u001b[92m19:24:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:53<27:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:55<27:55, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:08<00:02, 2.89s/it]2025-08-11 19:24:15,110 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 89%|███████████████████████████████████-----| 6569/7340 [237:56<27:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.34s/it]\n",
+ "\u001b[92m19:24:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6571/7340 [237:58<27:51, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:24:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6571/7340 [238:00<27:51, 27.6 steps/min]\n",
+ "2025-08-11 19:24:18,959 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:24:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6571/7340 [238:02<27:51, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6571/7340 [238:03<27:51, 27.6 steps/min]2025-08-11 19:24:22,172 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:24:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6571/7340 [238:05<27:51, 27.6 steps/min]\u001b[92m19:24:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:24:25,232 - agent.ComputerAgent - INFO - Computer: click({'x': 958, 'y': 330, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6572/7340 [238:07<27:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:24:27,407 - agent.ComputerAgent - INFO - Computer: click({'x': 548, 'y': 101})\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:10<27:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/reset \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:11<27:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:12<27:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:24:31,205 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:24:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:14<27:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:15<27:48, 27.6 steps/min]2025-08-11 19:24:33,914 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:24:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:16<27:48, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:24:35,595 - agent.ComputerAgent - INFO - Computer: click({'x': 428, 'y': 143})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 428, 'y': 143})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6573/7340 [238:18<27:48, 27.6 steps/min]\u001b[92m19:24:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40945322-97a1-4827-b747-39d3f993fa3d/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:19<27:43, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/36571ef9-5b2a-499c-92dc-16ca9627c11d/close \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:20<27:43, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:22<27:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:23<27:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:24:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:24:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:25<27:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:03<00:10, 3.48s/it]\u001b[92m19:24:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:27<27:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.80s/it]27.6 steps/min]\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:31<27:45, 27.6 steps/min]\u001b[92m19:24:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:24:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:32<27:45, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:33<27:45, 27.6 steps/min]\u001b[92m19:24:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:24:52,568 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 433})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 562, 'y': 433})\n",
+ " 90%|███████████████████████████████████-----| 6575/7340 [238:34<27:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6576/7340 [238:35<27:43, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6576/7340 [238:36<27:43, 27.6 steps/min]\u001b[92m19:24:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:24:55,399 - agent.ComputerAgent - INFO - Computer: click({'x': 268, 'y': 298})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 268, 'y': 298})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6576/7340 [238:37<27:43, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:24:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:24:57,118 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 151, 'y': 149})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6577/7340 [238:38<27:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:24:58,294 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:24:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6578/7340 [238:40<27:38, 27.6 steps/min]2025-08-11 19:24:58,963 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:24:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6578/7340 [238:41<27:38, 27.6 steps/min]\u001b[92m19:24:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:00,154 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:25:00,156 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 384})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 384})\n",
+ " 90%|███████████████████████████████████-----| 6579/7340 [238:43<27:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:02,861 - agent.ComputerAgent - INFO - Computer: click({'x': 625, 'y': 426})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 625, 'y': 426})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7f4008ee-6c98-4905-9ade-965ea7842b64/reset \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6579/7340 [238:44<27:36, 27.6 steps/min]2025-08-11 19:25:03,503 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:25:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6580/7340 [238:45<27:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6580/7340 [238:46<27:34, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:05,658 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:25:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:06,330 - agent.ComputerAgent - INFO - Computer: click({'x': 548, 'y': 124})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 548, 'y': 124})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f4008ee-6c98-4905-9ade-965ea7842b64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6580/7340 [238:48<27:34, 27.6 steps/min]2025-08-11 19:25:07,005 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:25:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:25:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:07,661 - agent.ComputerAgent - INFO - Computer: click({'x': 890, 'y': 616})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 890, 'y': 616})\n",
+ " 90%|███████████████████████████████████-----| 6581/7340 [238:49<27:32, 27.6 steps/min]2025-08-11 19:25:08,835 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:25:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6583/7340 [238:50<27:27, 27.6 steps/min]2025-08-11 19:25:09,494 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:25:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f55deafd-5880-4477-aaf2-d27143befb59/close \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6583/7340 [238:51<27:28, 27.6 steps/min]\u001b[92m19:25:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:10,872 - agent.ComputerAgent - INFO - Computer: click({'x': 393, 'y': 516})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 393, 'y': 516})\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6583/7340 [238:52<27:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6584/7340 [238:53<27:25, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:25:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:25:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:13,469 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 632})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 632})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.74s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [238:56<27:23, 27.6 steps/min]2025-08-11 19:25:15,923 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]\u001b[92m19:25:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:16,604 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:25:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [238:58<27:23, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [238:59<27:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "\u001b[92m19:25:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:19,044 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:25:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [239:00<27:24, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [239:01<27:24, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [239:03<27:24, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:25:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:25:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:23,166 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 918, 'y': 462}, {'x': 984, 'y': 478}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 918, 'y': 462}, {'x': 984, 'y': 478}]})\n",
+ " 90%|███████████████████████████████████-----| 6585/7340 [239:04<27:24, 27.5 steps/min]2025-08-11 19:25:23,820 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:25:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6586/7340 [239:05<27:22, 27.5 steps/min]\u001b[92m19:25:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:25,023 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 612, 'scroll_x': 0, 'x': 431, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 612, 'scroll_x': 0, 'x': 431, 'y': 739})\n",
+ " 90%|███████████████████████████████████-----| 6586/7340 [239:06<27:22, 27.5 steps/min]\u001b[92m19:25:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:25,680 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 284})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 284})\n",
+ " 90%|███████████████████████████████████-----| 6587/7340 [239:07<27:20, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 90%|███████████████████████████████████-----| 6588/7340 [239:08<27:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6588/7340 [239:09<27:18, 27.5 steps/min]\u001b[92m19:25:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:28,919 - agent.ComputerAgent - INFO - Computer: click({'x': 531, 'y': 410})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 531, 'y': 410})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6588/7340 [239:10<27:18, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:30,602 - agent.ComputerAgent - INFO - Computer: click({'x': 897, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 897, 'y': 155})\n",
+ "\u001b[92m19:25:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6589/7340 [239:12<27:15, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:31,284 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:25:31,287 - agent.ComputerAgent - INFO - Computer: move({'x': 512, 'y': 766})\n",
+ "INFO:agent.ComputerAgent:Computer: move({'x': 512, 'y': 766})\n",
+ "\u001b[92m19:25:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:32,593 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:33,251 - agent.ComputerAgent - INFO - Computer: click({'x': 186, 'y': 150})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 186, 'y': 150})\n",
+ "\u001b[92m19:25:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6590/7340 [239:14<27:13, 27.5 steps/min]\u001b[92m19:25:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:33,890 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 713})\n",
+ "2025-08-11 19:25:34,585 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:25:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:35,257 - agent.ComputerAgent - INFO - Computer: click({'x': 100, 'y': 391})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 100, 'y': 391})\n",
+ "\u001b[92m19:25:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:35,925 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:25:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6592/7340 [239:17<27:09, 27.5 steps/min]2025-08-11 19:25:36,581 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 587})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 219, 'y': 587})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:25:37,951 - agent.ComputerAgent - INFO - Computer: type({'text': '20 cm'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '20 cm'})\n",
+ " 90%|███████████████████████████████████-----| 6594/7340 [239:19<27:04, 27.6 steps/min]2025-08-11 19:25:38,567 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:25:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:39,217 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:25:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:39,857 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:25:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:40,885 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:25:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6596/7340 [239:22<27:00, 27.6 steps/min]2025-08-11 19:25:41,545 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:25:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6596/7340 [239:23<27:00, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6596/7340 [239:24<27:00, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:43,913 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:25:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:25:44,575 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:25:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:25:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6596/7340 [239:27<27:00, 27.5 steps/min]2025-08-11 19:25:45,942 - agent.ComputerAgent - INFO - Computer: click({'x': 395, 'y': 376})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 395, 'y': 376})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:25:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:25:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6596/7340 [239:28<27:00, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:47,232 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 654})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 654})\n",
+ "2025-08-11 19:25:47,906 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:25:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:25:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:50,293 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:25:50,295 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ " 90%|███████████████████████████████████-----| 6597/7340 [239:32<26:58, 27.5 steps/min]2025-08-11 19:25:50,985 - agent.ComputerAgent - INFO - Computer: click({'x': 559, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 559, 'y': 101})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:25:51,655 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:25:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6599/7340 [239:33<26:53, 27.5 steps/min]\u001b[92m19:25:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:25:53,114 - agent.ComputerAgent - INFO - Computer: click({'x': 975, 'y': 65})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 975, 'y': 65})\n",
+ " 90%|███████████████████████████████████-----| 6600/7340 [239:34<26:51, 27.5 steps/min]2025-08-11 19:25:53,769 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:25:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:54,422 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:25:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:25:55,091 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:25:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6601/7340 [239:36<26:49, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6601/7340 [239:37<26:49, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:25:57,315 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:25:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6601/7340 [239:39<26:49, 27.5 steps/min]2025-08-11 19:25:57,991 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:25:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:25:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6601/7340 [239:40<26:49, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:25:59,782 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:25:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:25:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6601/7340 [239:41<26:50, 27.5 steps/min]2025-08-11 19:26:00,474 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:26:00,475 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 524})\n",
+ "2025-08-11 19:26:01,154 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:26:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6601/7340 [239:42<26:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:02,521 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|███████████████████████████████████-----| 6602/7340 [239:44<26:47, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:26:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:03,206 - agent.ComputerAgent - INFO - Computer: click({'x': 426, 'y': 711})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 426, 'y': 711})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|███████████████████████████████████-----| 6602/7340 [239:45<26:48, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6603/7340 [239:48<26:45, 27.5 steps/min]\u001b[92m19:26:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:06,984 - agent.ComputerAgent - INFO - Computer: click({'x': 889, 'y': 501})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 889, 'y': 501})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:26:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:08,344 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 12})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 12})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|███████████████████████████████████-----| 6603/7340 [239:50<26:46, 27.5 steps/min]\u001b[92m19:26:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:09,032 - agent.ComputerAgent - INFO - Computer: click({'x': 878, 'y': 160})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 878, 'y': 160})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:10,370 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 19:26:10,992 - agent.ComputerAgent - INFO - Computer: click({'x': 235, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 235, 'y': 148})\n",
+ "\u001b[92m19:26:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:12,350 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ " 90%|███████████████████████████████████-----| 6605/7340 [239:54<26:41, 27.5 steps/min]2025-08-11 19:26:12,997 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 385})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:15,605 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:26:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:16,255 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:26:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:16,892 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 90%|████████████████████████████████████----| 6607/7340 [239:58<26:37, 27.5 steps/min]\u001b[92m19:26:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:26:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:18,092 - agent.ComputerAgent - INFO - Computer: click({'x': 198, 'y': 632})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 198, 'y': 632})\n",
+ " 90%|████████████████████████████████████----| 6608/7340 [239:59<26:35, 27.5 steps/min]\u001b[92m19:26:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:18,789 - agent.ComputerAgent - INFO - Computer: click({'x': 914, 'y': 617})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 914, 'y': 617})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|████████████████████████████████████----| 6609/7340 [240:02<26:33, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:21,840 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:26:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:22,531 - agent.ComputerAgent - INFO - Computer: click({'x': 986, 'y': 632})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 986, 'y': 632})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:23,829 - agent.ComputerAgent - INFO - Computer: type({'text': 'smtp.office365.com'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'smtp.office365.com'})\n",
+ "\u001b[92m19:26:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:25,137 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:26:25,138 - agent.ComputerAgent - INFO - Agent: Yes—here are reliable ways to standardize every text box to Liberation Sans Narrow in LibreOffice Impress without clicking each one.\n",
+ "\n",
+ "Method 1 (recommended): Change styles so all objects inherit the font\n",
+ "- Open your presentation.\n",
+ "- Press F11 to open the Styles deck.\n",
+ "- Presentation styles (for placeholders):\n",
+ " - Click the Presentation Styles icon.\n",
+ " - Right‑click each of Title, Subtitle, Outline 1–10 (or Outline), Notes, etc. > Modify > Font > choose Liberation Sans Narrow > OK.\n",
+ "- Drawing styles (for manual text boxes):\n",
+ " - Click the Drawing Styles icon.\n",
+ " - Right‑click Default > Modify > Font > Liberation Sans Narrow > OK.\n",
+ "- Remove any direct overrides so objects follow the styles:\n",
+ " - In the Slides pane, select all slides (click first, Shift‑click last).\n",
+ " - Press Ctrl+M (Format > Clear Direct Formatting). This makes existing text boxes pick up the new style font.\n",
+ "\n",
+ "Method 2: One‑click change with a macro (works even if text has direct formatting)\n",
+ "- Tools > Macros > Organize Macros > LibreOffice Basic > select your presentation > New.\n",
+ "- Paste this macro into the new module and Run:\n",
+ "\n",
+ "Sub SetAllTextToLiberationNarrow\n",
+ " Dim oDoc As Object, oSlides As Object, oSlide As Object\n",
+ " Dim oShape As Object\n",
+ " oDoc = ThisComponent\n",
+ " oSlides = oDoc.getDrawPages()\n",
+ " Dim i As Long, j As Long\n",
+ " For i = 0 To oSlides.getCount() - 1\n",
+ " oSlide = oSlides.getByIndex(i)\n",
+ " For j = 0 To oSlide.getCount() - 1\n",
+ " oShape = oSlide.getByIndex(j)\n",
+ " Call SetFontRecursive(oShape)\n",
+ " Next j\n",
+ " Next i\n",
+ "End Sub\n",
+ "\n",
+ "Sub SetFontRecursive(oShape As Object)\n",
+ " If oShape.supportsService(\"com.sun.star.drawing.GroupShape\") Then\n",
+ " Dim k As Long\n",
+ " For k = 0 To oShape.getCount() - 1\n",
+ " Call SetFontRecursive(oShape.getByIndex(k))\n",
+ " Next k\n",
+ " ElseIf oShape.supportsService(\"com.sun.star.drawing.Text\") Then\n",
+ " Dim oText As Object, oCur As Object\n",
+ " oText = oShape.getText()\n",
+ " oCur = oText.createTextCursor()\n",
+ " oCur.CharFontName = \"Liberation Sans Narrow\"\n",
+ " oCur.CharF...\n",
+ "2025-08-11 19:26:25,833 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 3294\n",
+ " - prompt_tokens: 1964\n",
+ " - total_tokens: 5258\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 2560\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0354\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|████████████████████████████████████----| 6611/7340 [240:07<26:28, 27.5 steps/min]2025-08-11 19:26:26,517 - agent.ComputerAgent - INFO - Computer: double_click({'x': 884, 'y': 123})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 884, 'y': 123})\n",
+ "2025-08-11 19:26:27,223 - agent.ComputerAgent - INFO - Computer: click({'x': 344, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 344, 'y': 34})\n",
+ "2025-08-11 19:26:27,897 - agent.ComputerAgent - INFO - Computer: click({'x': 548, 'y': 249})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 548, 'y': 249})\n",
+ "2025-08-11 19:26:28,563 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:26:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|████████████████████████████████████----| 6613/7340 [240:10<26:24, 27.5 steps/min]2025-08-11 19:26:29,242 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:26:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:30,582 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'f1'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'f1'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:31,909 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 90%|████████████████████████████████████----| 6616/7340 [240:13<26:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:33,178 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ " 90%|████████████████████████████████████----| 6618/7340 [240:14<26:12, 27.5 steps/min]2025-08-11 19:26:33,810 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:26:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:34,481 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:26:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|████████████████████████████████████----| 6619/7340 [240:16<26:10, 27.5 steps/min]2025-08-11 19:26:35,113 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:26:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:35,982 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:26:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f4008ee-6c98-4905-9ade-965ea7842b64/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|████████████████████████████████████----| 6619/7340 [240:17<26:10, 27.5 steps/min]2025-08-11 19:26:37,325 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:26:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|████████████████████████████████████----| 6619/7340 [240:19<26:10, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:38,682 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:26:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:39,380 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:26:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:26:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ " 90%|████████████████████████████████████----| 6619/7340 [240:21<26:10, 27.5 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:26:40,063 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:26:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:40,762 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 344, 'y': 137})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 344, 'y': 137})\n",
+ " 90%|████████████████████████████████████----| 6619/7340 [240:22<26:11, 27.5 steps/min]2025-08-11 19:26:41,401 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:26:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:42,073 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:26:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:26:42,723 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:26:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|████████████████████████████████████----| 6620/7340 [240:24<26:08, 27.5 steps/min]2025-08-11 19:26:43,402 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:26:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:45,407 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|████████████████████████████████████----| 6620/7340 [240:29<26:09, 27.5 steps/min]\u001b[92m19:26:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:48,083 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:26:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:48,753 - agent.ComputerAgent - INFO - Computer: click({'x': 914, 'y': 660})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 914, 'y': 660})\n",
+ "\u001b[92m19:26:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 90%|████████████████████████████████████----| 6621/7340 [240:31<26:07, 27.5 steps/min]\u001b[92m19:26:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:50,093 - agent.ComputerAgent - INFO - Computer: click({'x': 641, 'y': 498})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 641, 'y': 498})\n",
+ "\u001b[92m19:26:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:50,755 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 893, 'y': 134})\n",
+ "2025-08-11 19:26:51,455 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:26:51,456 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 386, 'y': 250})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 386, 'y': 250})\n",
+ "\u001b[92m19:26:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|████████████████████████████████████----| 6622/7340 [240:33<26:04, 27.5 steps/min]2025-08-11 19:26:52,153 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 150})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 150})\n",
+ "2025-08-11 19:26:52,821 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:26:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|████████████████████████████████████----| 6626/7340 [240:35<25:55, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|████████████████████████████████████----| 6626/7340 [240:36<25:55, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:26:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:26:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 90%|████████████████████████████████████----| 6626/7340 [240:37<25:55, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:26:56,855 - agent.ComputerAgent - INFO - Computer: click({'x': 46, 'y': 528})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 46, 'y': 528})\n",
+ "\u001b[92m19:26:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:26:57,527 - agent.ComputerAgent - INFO - Computer: click({'x': 731, 'y': 617})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 731, 'y': 617})\n",
+ "2025-08-11 19:26:58,183 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:26:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 90%|████████████████████████████████████----| 6626/7340 [240:39<25:56, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:58,873 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:26:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:26:59,543 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:26:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:00,884 - agent.ComputerAgent - INFO - Computer: type({'text': '20'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '20'})\n",
+ " 90%|████████████████████████████████████----| 6628/7340 [240:42<25:51, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f4008ee-6c98-4905-9ade-965ea7842b64/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:01,553 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:27:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:27:02,260 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:27:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:27:02,945 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:27:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f4008ee-6c98-4905-9ade-965ea7842b64/close \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6648/7340 [240:44<25:03, 27.6 steps/min]2025-08-11 19:27:04,256 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:27:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6648/7340 [240:46<25:03, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6648/7340 [240:47<25:03, 27.6 steps/min]2025-08-11 19:27:05,962 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:27:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6648/7340 [240:48<25:03, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:07,803 - agent.ComputerAgent - INFO - Computer: type({'text': 'Extensions: Install from VSIX'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Extensions: Install from VSIX'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:27:09,824 - agent.ComputerAgent - INFO - Computer: get_current_url({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_current_url({})\n",
+ " 91%|████████████████████████████████████----| 6648/7340 [240:51<25:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:27:11,072 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]\u001b[92m19:27:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:27:12,563 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ " 91%|████████████████████████████████████----| 6649/7340 [240:54<25:02, 27.6 steps/min]\u001b[92m19:27:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:27:13,244 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:27:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.79s/it]\u001b[92m19:27:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.47s/it]\n",
+ "2025-08-11 19:27:16,242 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 91%|████████████████████████████████████----| 6649/7340 [240:57<25:02, 27.6 steps/min]2025-08-11 19:27:17,577 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:27:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6650/7340 [241:00<25:00, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:19,629 - agent.ComputerAgent - INFO - Computer: click({'x': 432, 'y': 415})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6650/7340 [241:02<25:00, 27.6 steps/min]\n",
+ "\u001b[92m19:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:21,375 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 149})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:22,034 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 654})\n",
+ "\u001b[92m19:27:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6651/7340 [241:03<24:58, 27.6 steps/min]\u001b[92m19:27:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:27:22,661 - agent.ComputerAgent - INFO - Computer: click({'x': 205, 'y': 149})\n",
+ "2025-08-11 19:27:23,378 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 925, 'y': 564}, {'x': 987, 'y': 652}]})\n",
+ "2025-08-11 19:27:23,979 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6653/7340 [241:06<24:53, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:25,309 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:27:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 91%|████████████████████████████████████----| 6656/7340 [241:08<24:46, 27.6 steps/min]\u001b[92m19:27:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:27:27,695 - agent.ComputerAgent - INFO - Computer: click({'x': 386, 'y': 503})\n",
+ "\u001b[92m19:27:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:28,339 - agent.ComputerAgent - INFO - Computer: click({'x': 585, 'y': 268})\n",
+ "\u001b[92m19:27:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6656/7340 [241:10<24:47, 27.6 steps/min]2025-08-11 19:27:28,989 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 282})\n",
+ "2025-08-11 19:27:29,620 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:27:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6658/7340 [241:11<24:42, 27.6 steps/min]2025-08-11 19:27:30,311 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:27:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6659/7340 [241:14<24:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6659/7340 [241:15<24:40, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:34,711 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:27:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:27:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/2d8a6e51-acdb-47b9-8ee4-f3085c741fd5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:35,363 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 105, 'y': 467})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6659/7340 [241:17<24:40, 27.6 steps/min]2025-08-11 19:27:36,056 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:27:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:37,105 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:27:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6660/7340 [241:20<24:38, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:39,084 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:27:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:39,755 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:27:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:27:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6660/7340 [241:21<24:38, 27.6 steps/min]2025-08-11 19:27:40,445 - agent.ComputerAgent - INFO - Computer: click({'x': 303, 'y': 185})\n",
+ "2025-08-11 19:27:41,084 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:27:41,762 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:27:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:43,057 - agent.ComputerAgent - INFO - Computer: type({'text': 'sar -V'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:44,397 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6660/7340 [241:27<24:39, 27.6 steps/min]\u001b[92m19:27:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:27:46,370 - agent.ComputerAgent - INFO - Computer: click({'x': 302, 'y': 537})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:47,743 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6663/7340 [241:29<24:32, 27.6 steps/min]\n",
+ "\u001b[92m19:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:48,434 - agent.ComputerAgent - INFO - Computer: click({'x': 284, 'y': 297})\n",
+ "2025-08-11 19:27:49,071 - agent.ComputerAgent - INFO - Computer: click({'x': 799, 'y': 442})\n",
+ "\u001b[92m19:27:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6664/7340 [241:30<24:29, 27.6 steps/min]\n",
+ "2025-08-11 19:27:49,712 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:27:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:27:50,401 - agent.ComputerAgent - INFO - Computer: click({'x': 641, 'y': 332})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:27:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6666/7340 [241:33<24:25, 27.6 steps/min]\u001b[92m19:27:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:27:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:52,924 - agent.ComputerAgent - INFO - Computer: click({'x': 910, 'y': 617})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6667/7340 [241:35<24:23, 27.6 steps/min]\u001b[92m19:27:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:27:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:27:54,238 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 91})\n",
+ "\u001b[92m19:27:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:27:54,930 - agent.ComputerAgent - INFO - Computer: click({'x': 249, 'y': 321})\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:37<24:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:57,091 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:38<24:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:57,794 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:27:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:27:58,484 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:27:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:27:59,113 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:40<24:16, 27.6 steps/min]\n",
+ "2025-08-11 19:27:59,785 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:27:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:28:00,461 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:28:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:42<24:16, 27.6 steps/min]2025-08-11 19:28:01,113 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:28:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:28:01,759 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:28:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:44<24:16, 27.6 steps/min]\u001b[92m19:28:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:28:03,154 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:28:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:45<24:17, 27.6 steps/min]2025-08-11 19:28:04,717 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:28:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:47<24:17, 27.6 steps/min]\u001b[92m19:28:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:06,116 - agent.ComputerAgent - INFO - Computer: click({'x': 186, 'y': 148})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 91%|████████████████████████████████████----| 6670/7340 [241:48<24:17, 27.6 steps/min]\u001b[92m19:28:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:07,304 - agent.ComputerAgent - INFO - Computer: click({'x': 130, 'y': 607})\n",
+ " 91%|████████████████████████████████████----| 6672/7340 [241:51<24:12, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 91%|████████████████████████████████████----| 6672/7340 [241:52<24:12, 27.6 steps/min]\u001b[92m19:28:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:11,187 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:28:13,204 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 91%|████████████████████████████████████----| 6672/7340 [241:54<24:13, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:28:13,835 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:28:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6674/7340 [241:57<24:08, 27.6 steps/min]\u001b[92m19:28:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:16,572 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 564, 'x': 550, 'y': 221})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 564, 'x': 550, 'y': 221})\n",
+ "\u001b[92m19:28:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:17,870 - agent.ComputerAgent - INFO - Computer: click({'x': 242, 'y': 175})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 242, 'y': 175})\n",
+ " 91%|████████████████████████████████████----| 6675/7340 [241:59<24:06, 27.6 steps/min]\u001b[92m19:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:28:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:28:18,531 - agent.ComputerAgent - INFO - Computer: double_click({'x': 914, 'y': 644})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 914, 'y': 644})\n",
+ "2025-08-11 19:28:19,208 - agent.ComputerAgent - INFO - Computer: click({'x': 384, 'y': 532})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 384, 'y': 532})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:28:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:28:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:28:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6676/7340 [242:02<24:04, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:21,823 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 321})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:28:23,146 - agent.ComputerAgent - INFO - Computer: type({'text': 'sar -V\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sar -V\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6678/7340 [242:05<23:59, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:28:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:28:24,477 - agent.ComputerAgent - INFO - Computer: click({'x': 24, 'y': 737})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 24, 'y': 737})\n",
+ "2025-08-11 19:28:25,131 - agent.ComputerAgent - INFO - Computer: double_click({'x': 390, 'y': 277})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 390, 'y': 277})\n",
+ "2025-08-11 19:28:25,798 - agent.ComputerAgent - INFO - Computer: click({'x': 392, 'y': 95})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 392, 'y': 95})\n",
+ "\u001b[92m19:28:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6680/7340 [242:07<23:55, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:28:26,436 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 616})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 901, 'y': 616})\n",
+ "2025-08-11 19:28:27,077 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:28:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:28:27,754 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:28:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6683/7340 [242:09<23:48, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:28:28,395 - agent.ComputerAgent - INFO - LLM processing started with 17 messages\n",
+ "[... repeated LiteLLM / httpx / agent.ComputerAgent log lines truncated ...]\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:29:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6699/7340 [242:49<23:14, 27.6 steps/min]2025-08-11 19:29:08,186 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:29:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6699/7340 [242:50<23:14, 27.6 steps/min]2025-08-11 19:29:09,507 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:29:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:29:10,166 - agent.ComputerAgent - INFO - LLM processing started with 25 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 25 messages\n",
+ "\u001b[92m19:29:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6699/7340 [242:51<23:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "ERROR:asyncio:Unclosed client session\n",
+ "client_session: \n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6699/7340 [242:52<23:14, 27.6 steps/min]2025-08-11 19:29:11,849 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:29:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.65s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6702/7340 [242:55<23:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.60s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:29:15,569 - agent.ComputerAgent - INFO - Computer: click({'x': 889, 'y': 620, 'button': 'left'})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.63s/it]INFO:agent.ComputerAgent:Computer: click({'x': 889, 'y': 620, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "\u001b[92m19:29:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/09c933ad-61bf-4498-b248-0df86e3aea78/close \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6703/7340 [242:58<23:05, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:17,727 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:29:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 91%|████████████████████████████████████----| 6704/7340 [242:59<23:03, 27.6 steps/min]2025-08-11 19:29:18,405 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:29:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:19,085 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:29:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:29:19,793 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 635})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 635})\n",
+ "\u001b[92m19:29:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 91%|████████████████████████████████████----| 6704/7340 [243:01<23:03, 27.6 steps/min]2025-08-11 19:29:20,435 - agent.ComputerAgent - INFO - LLM processing started with 27 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 27 messages\n",
+ "\u001b[92m19:29:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:29:21,112 - agent.ComputerAgent - INFO - Computer: double_click({'x': 371, 'y': 80})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 371, 'y': 80})\n",
+ "\u001b[92m19:29:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6705/7340 [243:04<23:01, 27.6 steps/min]2025-08-11 19:29:23,840 - agent.ComputerAgent - INFO - Computer: click({'x': 344, 'y': 121})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 344, 'y': 121})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:29:25,140 - agent.ComputerAgent - INFO - Agent: Here’s how to set up a unified inbox in Thunderbird so all your accounts’ emails show in one place:\n",
+ "\n",
+ "- Open Thunderbird.\n",
+ "- Go to the application menu (three horizontal lines in the top-right) > View > Folders > Unified.\n",
+ "- In the left sidebar you’ll now see a “Unified Folders” section with a single “Inbox” that aggregates all accounts.\n",
+ "\n",
+ "Optional: choose exactly which accounts/folders are included\n",
+ "- In the left sidebar, under Unified Folders, right‑click Inbox > Properties.\n",
+ "- Click “Select the folders to search,” then check the Inbox for each account you want included. Click Update.\n",
+ "\n",
+ "Notes and tips\n",
+ "- Thunderbird remembers this view; if it ever changes, repeat View > Folders > Unified.\n",
+ "- If you use POP and want all mail to physically go to one inbox, set a Global Inbox: Account Settings > your POP account > Server Settings > Advanced > Global Inbox (Local Folders).\n",
+ "- If a folder is missing from the Unified view, right‑click that folder > Subscribe (for IMAP) to ensure it’s subscribed, and confirm it’s checked in the Unified Inbox Properties.\n",
+ "\n",
+ "Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Here’s how to set up a unified inbox in Thunderbird so all your accounts’ emails show in one place:\n",
+ "\n",
+ "- Open Thunderbird.\n",
+ "- Go to the application menu (three horizontal lines in the top-right) > View > Folders > Unified.\n",
+ "- In the left sidebar you’ll now see a “Unified Folders” section with a single “Inbox” that aggregates all accounts.\n",
+ "\n",
+ "Optional: choose exactly which accounts/folders are included\n",
+ "- In the left sidebar, under Unified Folders, right‑click Inbox > Properties.\n",
+ "- Click “Select the folders to search,” then check the Inbox for each account you want included. Click Update.\n",
+ "\n",
+ "Notes and tips\n",
+ "- Thunderbird remembers this view; if it ever changes, repeat View > Folders > Unified.\n",
+ "- If you use POP and want all mail to physically go to one inbox, set a Global Inbox: Account Settings > your POP account > Server Settings > Advanced > Global Inbox (Local Folders).\n",
+ "- If a folder is missing from the Unified view, right‑click that folder > Subscribe (for IMAP) to ensure it’s subscribed, and confirm it’s checked in the Unified Inbox Properties.\n",
+ "\n",
+ "Task completed\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.72s/it]2025-08-11 19:29:25,906 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 1266\n",
+ " - prompt_tokens: 12578\n",
+ " - total_tokens: 13844\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1024\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 11776\n",
+ " - response_cost: $0.0151\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 1266\n",
+ " - prompt_tokens: 12578\n",
+ " - total_tokens: 13844\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 1024\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 11776\n",
+ " - response_cost: $0.0151\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6708/7340 [243:09<22:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.64s/it]2025-08-11 19:29:29,309 - agent.ComputerAgent - INFO - Computer: type({'text': 'sar -V\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sar -V\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "\u001b[92m19:29:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6709/7340 [243:11<22:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6710/7340 [243:13<22:50, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:32,337 - agent.ComputerAgent - INFO - LLM processing started with 29 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 29 messages\n",
+ "\u001b[92m19:29:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:33,023 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 635})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 115, 'y': 635})\n",
+ "\u001b[92m19:29:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6710/7340 [243:14<22:50, 27.6 steps/min]\u001b[92m19:29:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:33,677 - agent.ComputerAgent - INFO - Computer: click({'x': 28, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 28, 'y': 739})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:34,335 - agent.ComputerAgent - INFO - Computer: click({'x': 530, 'y': 417})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 530, 'y': 417})\n",
+ "\u001b[92m19:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:34,988 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:29:34,989 - agent.ComputerAgent - INFO - Computer: click({'x': 80, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 80, 'y': 181})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 91%|████████████████████████████████████----| 6711/7340 [243:17<22:48, 27.6 steps/min]\u001b[92m19:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:36,366 - agent.ComputerAgent - INFO - Computer: click({'x': 186, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 186, 'y': 148})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:36,994 - agent.ComputerAgent - INFO - Computer: click({'x': 85, 'y': 234})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 85, 'y': 234})\n",
+ "\u001b[92m19:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:29:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 91%|████████████████████████████████████----| 6714/7340 [243:18<22:41, 27.6 steps/min]2025-08-11 19:29:37,669 - agent.ComputerAgent - INFO - Computer: click({'x': 483, 'y': 267})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 483, 'y': 267})\n",
+ "2025-08-11 19:29:38,353 - agent.ComputerAgent - INFO - Computer: click({'x': 974, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 974, 'y': 34})\n",
+ "2025-08-11 19:29:38,999 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:29:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 92%|████████████████████████████████████----| 6717/7340 [243:20<22:34, 27.6 steps/min]2025-08-11 19:29:39,730 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 914, 'y': 671}, {'x': 984, 'y': 467}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 914, 'y': 671}, {'x': 984, 'y': 467}]})\n",
+ "2025-08-11 19:29:40,390 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:29:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6719/7340 [243:22<22:29, 27.6 steps/min]2025-08-11 19:29:41,077 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:29:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:29:42,161 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:29:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/18debd9e-6c58-4504-8a04-13cba683a254/close \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6720/7340 [243:23<22:27, 27.6 steps/min]2025-08-11 19:29:42,846 - agent.ComputerAgent - INFO - LLM processing started with 31 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 31 messages\n",
+ "\u001b[92m19:29:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6725/7340 [243:26<22:15, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6725/7340 [243:27<22:15, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f055f32-c606-4a58-91bc-c5990d4e8280/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6725/7340 [243:28<22:15, 27.6 steps/min]2025-08-11 19:29:48,280 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:29:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<22:16, 27.6 steps/min]2025-08-11 19:29:49,316 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:29:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]2025-08-11 19:29:50,192 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:29:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6725/7340 [243:32<22:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.69s/it]27.6 steps/min]2025-08-11 19:29:51,889 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:29:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:29:52,572 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:29:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:29:53,444 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.71s/it]INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:29:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "\u001b[92m19:29:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6726/7340 [243:35<22:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:29:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:29:55,603 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ " 92%|████████████████████████████████████----| 6726/7340 [243:37<22:14, 27.6 steps/min]\u001b[92m19:29:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:29:56,811 - agent.ComputerAgent - INFO - LLM processing started with 33 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 33 messages\n",
+ "\u001b[92m19:29:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.94s/it]2025-08-11 19:29:58,185 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6726/7340 [243:39<22:14, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.78s/it]\u001b[92m19:29:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9a544504-3e48-48b2-8429-0a97e266ebfb/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/84a5d283-63f1-43fc-b483-76116d67f385/close \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.72s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.50s/it]\n",
+ "\u001b[92m19:30:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<22:14, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6733/7340 [243:44<21:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:04,484 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m19:30:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/77892268-14f2-4dfa-b58c-6a682f258679/close \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6733/7340 [243:46<21:58, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:02<00:07, 2.57s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:04<00:04, 2.01s/it]27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6733/7340 [243:48<21:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6733/7340 [243:49<21:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.83s/it]27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:30:10,570 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m19:30:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6735/7340 [243:52<21:54, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6735/7340 [243:54<21:54, 27.6 steps/min]\u001b[92m19:30:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:30:13,843 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.78s/it]\n",
+ "\u001b[92m19:30:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 92%|████████████████████████████████████----| 6736/7340 [243:56<21:52, 27.6 steps/min]\u001b[92m19:30:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:30:15,234 - agent.ComputerAgent - INFO - Computer: click({'x': 652, 'y': 34})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 652, 'y': 34})\n",
+ " 92%|████████████████████████████████████----| 6736/7340 [243:57<21:52, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:30:17,640 - agent.ComputerAgent - INFO - Agent: Installed the extension from /home/user/test.vsix in VS Code. Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: Installed the extension from /home/user/test.vsix in VS Code. Task completed.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:18,293 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 410\n",
+ " - prompt_tokens: 12824\n",
+ " - total_tokens: 13234\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 10752\n",
+ " - response_cost: $0.0080\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 410\n",
+ " - prompt_tokens: 12824\n",
+ " - total_tokens: 13234\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 384\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 10752\n",
+ " - response_cost: $0.0080\n",
+ "\u001b[92m19:30:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6738/7340 [244:00<21:47, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:30:18,980 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m19:30:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:30:19,648 - agent.ComputerAgent - INFO - Computer: click({'x': 132, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 132, 'y': 155})\n",
+ " 92%|████████████████████████████████████----| 6738/7340 [244:01<21:48, 27.6 steps/min]\u001b[92m19:30:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:30:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 92%|████████████████████████████████████----| 6739/7340 [244:02<21:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6739/7340 [244:03<21:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:22,346 - agent.ComputerAgent - INFO - Computer: click({'x': 260, 'y': 80})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 260, 'y': 80})\n",
+ "2025-08-11 19:30:22,985 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:30:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6739/7340 [244:04<21:46, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 92%|████████████████████████████████████----| 6747/7340 [244:05<21:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/bcec4523-df7a-48b5-aea1-8d7c632a6dc4/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6747/7340 [244:06<21:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:25,849 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:30:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:30:27,174 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 474})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 664, 'y': 474})\n",
+ "2025-08-11 19:30:27,803 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m19:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6747/7340 [244:09<21:27, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:30:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:30:28,491 - agent.ComputerAgent - INFO - Computer: double_click({'x': 87, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 87, 'y': 185})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:30:29,173 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:30:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6748/7340 [244:10<21:25, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ " 92%|████████████████████████████████████----| 6750/7340 [244:12<21:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.72s/it]2025-08-11 19:30:32,473 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m19:30:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6750/7340 [244:14<21:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6750/7340 [244:15<21:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.42s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6750/7340 [244:16<21:21, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:35,193 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:30:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:30:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:35,885 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:30:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:36,559 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 164, 'y': 149})\n",
+ " 92%|████████████████████████████████████----| 6750/7340 [244:18<21:21, 27.6 steps/min]\u001b[92m19:30:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "2025-08-11 19:30:37,227 - agent.ComputerAgent - INFO - Computer: click({'x': 263, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 263, 'y': 148})\n",
+ "\u001b[92m19:30:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:30:37,904 - agent.ComputerAgent - INFO - Computer: click({'x': 106, 'y': 739})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 106, 'y': 739})\n",
+ " 92%|████████████████████████████████████----| 6752/7340 [244:19<21:16, 27.6 steps/min]\u001b[92m19:30:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:30:38,604 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 151, 'y': 149})\n",
+ "\u001b[92m19:30:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:30:39,248 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 472, 'y': 422}, {'x': 534, 'y': 422}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 472, 'y': 422}, {'x': 534, 'y': 422}]})\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:21<21:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:22<21:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:24<21:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/73c70c0d-c1a0-401f-83c0-063e983abd6c/close \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:25<21:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:26<21:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:30:45,980 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:30:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:27<21:07, 27.6 steps/min]2025-08-11 19:30:47,036 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:30:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:28<21:07, 27.6 steps/min]2025-08-11 19:30:47,727 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:30:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:30:48,368 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:30:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:30<21:08, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:30:49,020 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:30:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:31<21:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:30:51,572 - agent.ComputerAgent - INFO - Computer: type({'text': 'sar -V\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sar -V\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:30:52,954 - agent.ComputerAgent - INFO - Computer: type({'text': '=A2/1000000'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=A2/1000000'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6756/7340 [244:34<21:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:30:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:30:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.75s/it]2025-08-11 19:30:55,237 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:30:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6758/7340 [244:36<21:03, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6758/7340 [244:37<21:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.66s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.37s/it]\n",
+ " 92%|████████████████████████████████████----| 6758/7340 [244:41<21:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:00,992 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:31:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6758/7340 [244:42<21:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:02,379 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:31:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:03,692 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:31:03,694 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win+i'})\n",
+ " 92%|████████████████████████████████████----| 6758/7340 [244:45<21:04, 27.6 steps/min]2025-08-11 19:31:04,344 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 545, 'scroll_x': 0, 'x': 523, 'y': 454})\n",
+ "2025-08-11 19:31:05,378 - agent.ComputerAgent - INFO - Computer: click({'x': 591, 'y': 520})\n",
+ "\u001b[92m19:31:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:06,672 - agent.ComputerAgent - INFO - Computer: type({'text': 'Month\\tTotal\\nJan\\t\\nFeb\\t\\nMar\\t\\nApr\\t\\nMay\\t\\nJun\\t'})\n",
+ "2025-08-11 19:31:07,378 - agent.ComputerAgent - INFO - Computer: click({'x': 600, 'y': 35})\n",
+ "2025-08-11 19:31:08,028 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ " 92%|████████████████████████████████████----| 6759/7340 [244:49<21:02, 27.6 steps/min]\u001b[92m19:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:31:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6763/7340 [244:51<20:53, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:10,038 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:31:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:31:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:10,727 - agent.ComputerAgent - INFO - Computer: click({'x': 313, 'y': 164})\n",
+ " 92%|████████████████████████████████████----| 6763/7340 [244:52<20:53, 27.6 steps/min]\u001b[92m19:31:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:11,440 - agent.ComputerAgent - INFO - Computer: click({'x': 186, 'y': 150})\n",
+ " 92%|████████████████████████████████████----| 6764/7340 [244:53<20:51, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/reset \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6765/7340 [244:55<20:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:15,289 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6765/7340 [244:57<20:49, 27.6 steps/min]2025-08-11 19:31:15,969 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:31:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:16,638 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:31:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [244:58<20:46, 27.6 steps/min]2025-08-11 19:31:17,280 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:31:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:17,951 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [244:59<20:47, 27.6 steps/min]2025-08-11 19:31:18,599 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:31:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [245:00<20:47, 27.6 steps/min]2025-08-11 19:31:19,799 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:31:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:31:20,467 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:31:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [245:02<20:47, 27.6 steps/min]\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [245:03<20:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [245:04<20:47, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:23,332 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:31:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [245:05<20:47, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:24,673 - agent.ComputerAgent - INFO - Computer: double_click({'x': 984, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6766/7340 [245:07<20:47, 27.6 steps/min]\u001b[92m19:31:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:26,050 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:31:26,051 - agent.ComputerAgent - INFO - Computer: click({'x': 647, 'y': 476})\n",
+ "\u001b[92m19:31:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:26,797 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 741})\n",
+ " 92%|████████████████████████████████████----| 6769/7340 [245:09<20:40, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:28,501 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:31:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:29,955 - agent.ComputerAgent - INFO - Computer: type({'text': 'cd ~/Desktop\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:31,772 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6769/7340 [245:13<20:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:32,409 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:31:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:33,082 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:31:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/052ac585-1998-46b2-9ac5-0dc192aeba02/close \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6771/7340 [245:14<20:36, 27.6 steps/min]2025-08-11 19:31:33,738 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:31:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:35,070 - agent.ComputerAgent - INFO - Computer: type({'text': 'Settings'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:38,089 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:31:38,090 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ " 92%|████████████████████████████████████----| 6771/7340 [245:19<20:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6773/7340 [245:21<20:32, 27.6 steps/min]2025-08-11 19:31:40,099 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:31:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:31:40,769 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:31:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6773/7340 [245:23<20:32, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.83s/it]\u001b[92m19:31:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████     | 2/4 [00:03<00:03, 1.69s/it]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:44,870 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:31:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌  | 3/4 [00:05<00:01, 1.65s/it]2025-08-11 19:31:45,540 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:31:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.41s/it]\n",
+ "\u001b[92m19:31:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6773/7340 [245:28<20:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:31:47,684 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:31:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6773/7340 [245:29<20:33, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:31:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:48,843 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 185})\n",
+ " 92%|████████████████████████████████████----| 6773/7340 [245:30<20:33, 27.6 steps/min]\u001b[92m19:31:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:49,546 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 165})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6774/7340 [245:32<20:30, 27.6 steps/min]\u001b[92m19:31:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:51,586 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 390})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:31:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:52,870 - agent.ComputerAgent - INFO - Computer: click({'x': 174, 'y': 149})\n",
+ "\u001b[92m19:31:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6775/7340 [245:34<20:28, 27.6 steps/min]\n",
+ "\u001b[92m19:31:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:53,553 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:31:53,554 - agent.ComputerAgent - INFO - Computer: click({'x': 648, 'y': 367})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 648, 'y': 367})\n",
+ "\u001b[92m19:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:31:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:54,243 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 316, 'y': 230})\n",
+ " 92%|████████████████████████████████████----| 6777/7340 [245:35<20:24, 27.6 steps/min]\u001b[92m19:31:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:54,982 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 643})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 989, 'y': 643})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:31:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6779/7340 [245:37<20:19, 27.6 steps/min]\u001b[92m19:31:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:31:56,318 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 741})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 164, 'y': 741})\n",
+ "\u001b[92m19:31:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:57,023 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 90, 'y': 35}, {'x': 688, 'y': 34}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 90, 'y': 35}, {'x': 688, 'y': 34}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:31:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6780/7340 [245:39<20:17, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:58,397 - agent.ComputerAgent - INFO - Computer: click({'x': 388, 'y': 128})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 388, 'y': 128})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 92%|████████████████████████████████████----| 6782/7340 [245:40<20:12, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:31:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:31:59,958 - agent.ComputerAgent - INFO - Computer: click({'x': 545, 'y': 142})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 545, 'y': 142})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6783/7340 [245:41<20:10, 27.6 steps/min]2025-08-11 19:32:00,620 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:32:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:01,291 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:32:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6784/7340 [245:43<20:08, 27.6 steps/min]2025-08-11 19:32:01,980 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:32:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:03,336 - agent.ComputerAgent - INFO - Computer: type({'text': 'sar -u 1 30 > System_Resources_Report.txt\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sar -u 1 30 > System_Resources_Report.txt\\n'})\n",
+ " 92%|████████████████████████████████████----| 6784/7340 [245:45<20:08, 27.6 steps/min]2025-08-11 19:32:03,959 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:32:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:04,639 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:32:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:46<20:06, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:05,820 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:32:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:47<20:06, 27.6 steps/min]2025-08-11 19:32:06,520 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:32:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:48<20:06, 27.6 steps/min]2025-08-11 19:32:07,704 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:32:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:08,380 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:32:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:50<20:06, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:10,389 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:32:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:53<20:06, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:12,779 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:32:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:54<20:06, 27.6 steps/min]2025-08-11 19:32:13,458 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 169})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 169})\n",
+ "\u001b[92m19:32:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:14,530 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 125, 'y': 182})\n",
+ " 92%|████████████████████████████████████----| 6785/7340 [245:56<20:07, 27.6 steps/min]2025-08-11 19:32:15,198 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:32:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:16,538 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'win'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'win'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6787/7340 [245:58<20:02, 27.6 steps/min]\u001b[92m19:32:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:19,209 - agent.ComputerAgent - INFO - Computer: type({'text': 'Dublin'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Dublin'})\n",
+ " 92%|████████████████████████████████████----| 6788/7340 [246:00<20:00, 27.6 steps/min]\u001b[92m19:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:19,877 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:32:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:20,569 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 183})\n",
+ "\u001b[92m19:32:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:21,243 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 219})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 430, 'y': 219})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 92%|████████████████████████████████████----| 6789/7340 [246:04<19:58, 27.6 steps/min]\u001b[92m19:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:32:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:32:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:23,644 - agent.ComputerAgent - INFO - Computer: click({'x': 188, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 188, 'y': 190})\n",
+ " 93%|█████████████████████████████████████---| 6791/7340 [246:05<19:53, 27.6 steps/min]\u001b[92m19:32:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:24,280 - agent.ComputerAgent - INFO - Computer: click({'x': 123, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 123, 'y': 178})\n",
+ " 93%|█████████████████████████████████████---| 6793/7340 [246:07<19:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 93%|█████████████████████████████████████---| 6793/7340 [246:08<19:49, 27.6 steps/min]\u001b[92m19:32:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:27,608 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 476})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 476})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6793/7340 [246:09<19:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:28,295 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:32:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:28,953 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:32:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6794/7340 [246:10<19:47, 27.6 steps/min]2025-08-11 19:32:29,623 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:32:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6794/7340 [246:12<19:47, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:30,976 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:32:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:31,628 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:32:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:32:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6794/7340 [246:13<19:47, 27.6 steps/min]2025-08-11 19:32:32,293 - agent.ComputerAgent - INFO - Computer: click({'x': 534, 'y': 554})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 534, 'y': 554})\n",
+ "2025-08-11 19:32:32,937 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:32:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6794/7340 [246:14<19:47, 27.6 steps/min]2025-08-11 19:32:33,607 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:32:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:34,249 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:32:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6795/7340 [246:17<19:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:36,916 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:32:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6795/7340 [246:18<19:45, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 93%|█████████████████████████████████████---| 6795/7340 [246:19<19:45, 27.6 steps/min]2025-08-11 19:32:38,805 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:32:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:39,495 - agent.ComputerAgent - INFO - Computer: click({'x': 173, 'y': 149})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 173, 'y': 149})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6795/7340 [246:21<19:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:40,803 - agent.ComputerAgent - INFO - Computer: type({'text': 'wc -l System_Resources_Report.txt\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'wc -l System_Resources_Report.txt\\n'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6796/7340 [246:23<19:43, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:42,099 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:32:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6797/7340 [246:24<19:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:32:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:44,597 - agent.ComputerAgent - INFO - Computer: double_click({'x': 482, 'y': 277})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 482, 'y': 277})\n",
+ " 93%|█████████████████████████████████████---| 6797/7340 [246:26<19:41, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:32:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:46,644 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(Sheet1.B2:B11)'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:47,299 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:32:47,300 - agent.ComputerAgent - INFO - Computer: click({'x': 578, 'y': 286})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6798/7340 [246:29<19:39, 27.6 steps/min]\u001b[92m19:32:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:32:48,654 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 416})\n",
+ "\u001b[92m19:32:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:49,313 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 416})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:32:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6800/7340 [246:31<19:34, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:50,681 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 254})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:32:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6802/7340 [246:33<19:30, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:52,640 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:32:52,640 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 524})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:54,035 - agent.ComputerAgent - INFO - Computer: type({'text': '=A2/1000000'})\n",
+ "\u001b[92m19:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:32:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6803/7340 [246:35<19:27, 27.6 steps/min]2025-08-11 19:32:54,689 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 481})\n",
+ "2025-08-11 19:32:55,406 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 193})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:32:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6805/7340 [246:37<19:23, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:32:56,702 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:32:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:57,376 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:32:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:32:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6807/7340 [246:39<19:18, 27.6 steps/min]2025-08-11 19:32:58,080 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 183})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:32:58,735 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:32:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:32:59,441 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:32:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6807/7340 [246:41<19:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6808/7340 [246:42<19:16, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:01,450 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:33:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:02,154 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:33:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:33:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6808/7340 [246:43<19:16, 27.6 steps/min]2025-08-11 19:33:02,855 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:33:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:33:03,530 - agent.ComputerAgent - INFO - Computer: click({'x': 633, 'y': 473})\n",
+ "\u001b[92m19:33:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6808/7340 [246:45<19:16, 27.6 steps/min]\u001b[92m19:33:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:04,220 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:04,896 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:33:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6809/7340 [246:46<19:14, 27.6 steps/min]2025-08-11 19:33:05,586 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 424, 'y': 418}, {'x': 527, 'y': 226}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6809/7340 [246:48<19:14, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:06,917 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:33:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:33:07,597 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:33:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:33:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+        " 93%|█████████████████████████████████████---| 6810/7340 [246:49<19:12, 27.6 steps/min]\n",
+ "2025-08-11 19:33:08,272 - agent.ComputerAgent - INFO - Computer: click({'x': 946, 'y': 750})\n",
+ "2025-08-11 19:33:08,957 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:33:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6811/7340 [246:51<19:10, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6812/7340 [246:52<19:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51954fb4-34ed-4511-b2fd-a6169b5ea5d3/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:13,491 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 93%|█████████████████████████████████████---| 6812/7340 [246:55<19:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:14,808 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6813/7340 [246:57<19:06, 27.6 steps/min]2025-08-11 19:33:16,837 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:33:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:33:17,526 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:33:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6814/7340 [246:59<19:03, 27.6 steps/min]2025-08-11 19:33:18,187 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:33:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6814/7340 [247:00<19:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:20,027 - agent.ComputerAgent - INFO - Computer: click({'x': 89, 'y': 185})\n",
+ " 93%|█████████████████████████████████████---| 6814/7340 [247:01<19:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:21,352 - agent.ComputerAgent - INFO - Computer: type({'text': 'find / -type f -name \"secret.docx\" 2>/dev/null'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6815/7340 [247:03<19:01, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:22,639 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:33:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:33:23,336 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:33:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6816/7340 [247:05<18:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:33:24,938 - agent.ComputerAgent - INFO - Agent: CPU statistics were collected with: sar -u 1 30 and saved to:\n",
+ "~/Desktop/System_Resources_Report.txt\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 19:33:25,858 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 549\n",
+ " - prompt_tokens: 10106\n",
+ " - total_tokens: 10655\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 512\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 8704\n",
+ " - response_cost: $0.0083\n",
+ " 93%|█████████████████████████████████████---| 6817/7340 [247:08<18:57, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6817/7340 [247:09<18:57, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:28,687 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:33:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.67s/it]\u001b[92m19:33:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+        "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00,  1.39s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:33:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/775a5b67-2406-42b8-86e5-243e01b8dc27/close \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6824/7340 [247:13<18:41, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:33,758 - agent.ComputerAgent - INFO - Computer: type({'text': 'Baby Justin Bieber.mp3'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:34,422 - agent.ComputerAgent - INFO - Computer: click({'x': 313, 'y': 408})\n",
+ "\u001b[92m19:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6824/7340 [247:16<18:41, 27.6 steps/min]\u001b[92m19:33:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:35,076 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 233})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:35,713 - agent.ComputerAgent - INFO - Computer: click({'x': 554, 'y': 100})\n",
+ "\u001b[92m19:33:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:33:37,039 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "\u001b[92m19:33:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:33:38,069 - agent.ComputerAgent - INFO - Computer: click({'x': 219, 'y': 351})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6826/7340 [247:20<18:37, 27.6 steps/min]\u001b[92m19:33:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:33:39,403 - agent.ComputerAgent - INFO - Computer: click({'x': 955, 'y': 752})\n",
+ "2025-08-11 19:33:40,111 - agent.ComputerAgent - INFO - Computer: click({'x': 366, 'y': 230})\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:33:40,740 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:33:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6829/7340 [247:22<18:30, 27.6 steps/min]2025-08-11 19:33:41,417 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:33:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.77s/it]27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6831/7340 [247:25<18:26, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.63s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6831/7340 [247:27<18:26, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 19:33:46,934 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6831/7340 [247:28<18:26, 27.6 steps/min]2025-08-11 19:33:47,606 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:33:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:48,546 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:33:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 93%|█████████████████████████████████████---| 6832/7340 [247:30<18:24, 27.6 steps/min]\u001b[92m19:33:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:33:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:33:49,904 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 190})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 125, 'y': 190})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:50,568 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:33:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:51,218 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:33:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:33:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:33:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6832/7340 [247:32<18:24, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:33:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:33:52,255 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 343, 'y': 183})\n",
+ "\u001b[92m19:33:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6833/7340 [247:33<18:22, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:52,907 - agent.ComputerAgent - INFO - Computer: click({'x': 129, 'y': 172})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 129, 'y': 172})\n",
+ "2025-08-11 19:33:53,528 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:33:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:33:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6834/7340 [247:35<18:19, 27.6 steps/min]2025-08-11 19:33:54,227 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:33:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:33:55,279 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 524, 'y': 221}, {'x': 737, 'y': 617}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 524, 'y': 221}, {'x': 737, 'y': 617}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:33:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6835/7340 [247:37<18:17, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:33:56,617 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:33:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:33:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:33:57,284 - agent.ComputerAgent - INFO - Computer: click({'x': 977, 'y': 37})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 977, 'y': 37})\n",
+ " 93%|█████████████████████████████████████---| 6836/7340 [247:39<18:15, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6837/7340 [247:40<18:13, 27.6 steps/min]2025-08-11 19:33:59,476 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:33:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6837/7340 [247:41<18:13, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6837/7340 [247:42<18:13, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:01,338 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:02,051 - agent.ComputerAgent - INFO - Computer: click({'x': 369, 'y': 240})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 369, 'y': 240})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6837/7340 [247:43<18:13, 27.6 steps/min]2025-08-11 19:34:02,705 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:34:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/31367309-0055-409a-a992-edf729fb010c/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:03,387 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:34:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6838/7340 [247:45<18:11, 27.6 steps/min]2025-08-11 19:34:04,075 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:34:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6838/7340 [247:46<18:11, 27.6 steps/min]2025-08-11 19:34:04,756 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:34:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6838/7340 [247:47<18:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:06,627 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6838/7340 [247:48<18:11, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:07,284 - agent.ComputerAgent - INFO - Computer: click({'x': 303, 'y': 278})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 303, 'y': 278})\n",
+ " 93%|█████████████████████████████████████---| 6838/7340 [247:49<18:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6839/7340 [247:50<18:09, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:10,250 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(Sheet1.C2:C11)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=SUM(Sheet1.C2:C11)'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:34:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6839/7340 [247:52<18:09, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:11,647 - agent.ComputerAgent - INFO - Computer: click({'x': 242, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 242, 'y': 230})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:34:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6840/7340 [247:54<18:07, 27.6 steps/min]2025-08-11 19:34:12,978 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:13,649 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6841/7340 [247:55<18:05, 27.6 steps/min]\u001b[92m19:34:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:14,300 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:34:14,301 - agent.ComputerAgent - INFO - Computer: click({'x': 18, 'y': 385})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 18, 'y': 385})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6842/7340 [247:56<18:02, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:15,634 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:16,340 - agent.ComputerAgent - INFO - Computer: click({'x': 664, 'y': 473})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 664, 'y': 473})\n",
+ " 93%|█████████████████████████████████████---| 6844/7340 [247:59<17:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6844/7340 [248:00<17:58, 27.6 steps/min]2025-08-11 19:34:19,018 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:34:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6844/7340 [248:01<17:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:20,349 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:34:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6844/7340 [248:02<17:58, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:21,659 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:34:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:22,327 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 713})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 989, 'y': 713})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6844/7340 [248:04<17:58, 27.6 steps/min]\u001b[92m19:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:23,658 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:25,017 - agent.ComputerAgent - INFO - Agent: Task completed\n",
+ "INFO:agent.ComputerAgent:Agent: Task completed\n",
+ "2025-08-11 19:34:25,679 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 139\n",
+ " - prompt_tokens: 6040\n",
+ " - total_tokens: 6179\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 128\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0089\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 139\n",
+ " - prompt_tokens: 6040\n",
+ " - total_tokens: 6179\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 128\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0089\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:34:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6846/7340 [248:08<17:54, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:26,994 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 343, 'y': 184})\n",
+ "\u001b[92m19:34:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:34:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:34:28,341 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 737, 'y': 619}, {'x': 187, 'y': 620}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 737, 'y': 619}, {'x': 187, 'y': 620}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6846/7340 [248:10<17:54, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:34:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:29,665 - agent.ComputerAgent - INFO - Computer: click({'x': 702, 'y': 238})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 702, 'y': 238})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:31,003 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m19:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:34:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6848/7340 [248:12<17:49, 27.6 steps/min]2025-08-11 19:34:31,655 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 124})\n",
+ "2025-08-11 19:34:32,341 - agent.ComputerAgent - INFO - Computer: click({'x': 538, 'y': 608})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/reset \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6850/7340 [248:14<17:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:34:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/5e73167c-1836-4752-b7e8-57434e5d7875/reset \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:34,285 - agent.ComputerAgent - INFO - Computer: click({'x': 306, 'y': 416})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6852/7340 [248:17<17:40, 27.6 steps/min]\u001b[92m19:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:34:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:36,337 - agent.ComputerAgent - INFO - Computer: click({'x': 129, 'y': 182})\n",
+ "\u001b[92m19:34:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:36,993 - agent.ComputerAgent - INFO - Computer: click({'x': 989, 'y': 643})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 93%|█████████████████████████████████████---| 6853/7340 [248:19<17:38, 27.6 steps/min]\u001b[92m19:34:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:34:38,296 - agent.ComputerAgent - INFO - Computer: click({'x': 213, 'y': 166})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:38,930 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:34:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:34:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/reset \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6855/7340 [248:20<17:34, 27.6 steps/min]2025-08-11 19:34:39,597 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 77})\n",
+ "2025-08-11 19:34:40,246 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:34:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6856/7340 [248:22<17:32, 27.6 steps/min]2025-08-11 19:34:40,933 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:34:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:41,640 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:34:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:23<17:29, 27.6 steps/min]2025-08-11 19:34:42,299 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:34:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:42,982 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:34:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:24<17:29, 27.6 steps/min]2025-08-11 19:34:43,670 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:34:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:44,668 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:34:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:26<17:30, 27.6 steps/min]2025-08-11 19:34:45,369 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:34:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:46,019 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:27<17:30, 27.6 steps/min]2025-08-11 19:34:46,680 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:34:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:34:47,368 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:34:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:34:48,069 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:34:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:34:48,749 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:30<17:30, 27.6 steps/min]\u001b[92m19:34:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:34:49,439 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:34:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:31<17:30, 27.6 steps/min]2025-08-11 19:34:50,668 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:34:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 93%|█████████████████████████████████████---| 6857/7340 [248:33<17:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:52,987 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(Sheet1.D2:D11)'})\n",
+ " 93%|█████████████████████████████████████---| 6858/7340 [248:36<17:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:56,377 - agent.ComputerAgent - INFO - Computer: click({'x': 98, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 93%|█████████████████████████████████████---| 6858/7340 [248:38<17:28, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:34:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:34:58,728 - agent.ComputerAgent - INFO - Computer: type({'text': \"find ~ -type f -name 'secret.docx'\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b928bd01-f1b7-4f34-accf-acb6aec5d8cd/close \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6872/7340 [248:40<16:56, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:35:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<16:53, 27.6 steps/min]2025-08-11 19:35:00,742 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:35:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:35:01,450 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:35:01,451 - agent.ComputerAgent - INFO - Computer: click({'x': 509, 'y': 362})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.71s/it]2025-08-11 19:35:02,878 - agent.ComputerAgent - INFO - Computer: type({'text': 'background'})\n",
+ " 94%|█████████████████████████████████████---| 6873/7340 [248:44<16:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.65s/it]\u001b[92m19:35:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:05,256 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:35:05,258 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.36s/it]\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:35:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:08,941 - agent.ComputerAgent - INFO - Computer: type({'text': 'Period Rate (%)'})\n",
+ " 94%|█████████████████████████████████████---| 6875/7340 [248:50<16:49, 27.6 steps/min]\u001b[92m19:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:35:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:09,608 - agent.ComputerAgent - INFO - Computer: click({'x': 605, 'y': 501})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:10,230 - agent.ComputerAgent - INFO - Computer: click({'x': 694, 'y': 136})\n",
+ "\u001b[92m19:35:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:11,547 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ " 94%|█████████████████████████████████████---| 6877/7340 [248:53<16:45, 27.6 steps/min]2025-08-11 19:35:12,183 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:35:12,184 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_x': 0, 'scroll_y': 640, 'x': 965, 'y': 702})\n",
+ "\u001b[92m19:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:12,891 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 331})\n",
+ "\u001b[92m19:35:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6880/7340 [248:54<16:38, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:35:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:14,120 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 186, 'y': 623}, {'x': 680, 'y': 219}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6882/7340 [248:55<16:33, 27.6 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:35:14,746 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:35:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:15,440 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:35:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6883/7340 [248:57<16:31, 27.6 steps/min]2025-08-11 19:35:16,103 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:35:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6883/7340 [248:59<16:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:18,288 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:35:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:35:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6883/7340 [249:01<16:32, 27.6 steps/min]\u001b[92m19:35:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:20,623 - agent.ComputerAgent - INFO - Computer: click({'x': 223, 'y': 35})\n",
+ "2025-08-11 19:35:21,629 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:35:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:35:22,260 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6883/7340 [249:04<16:32, 27.6 steps/min]2025-08-11 19:35:22,958 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:35:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:23,631 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:35:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:25,773 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 94%|█████████████████████████████████████---| 6884/7340 [249:07<16:30, 27.6 steps/min]\u001b[92m19:35:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:27,072 - agent.ComputerAgent - INFO - Computer: type({'text': '2560'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '2560'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:27,720 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 128, 'y': 185}, {'x': 129, 'y': 272}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 128, 'y': 185}, {'x': 129, 'y': 272}]})\n",
+ " 94%|█████████████████████████████████████---| 6885/7340 [249:09<16:27, 27.6 steps/min]2025-08-11 19:35:28,371 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:35:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:29,041 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:35:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:30,360 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(Sheet1.E2:E11)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=SUM(Sheet1.E2:E11)'})\n",
+ " 94%|█████████████████████████████████████---| 6887/7340 [249:12<16:23, 27.6 steps/min]2025-08-11 19:35:31,029 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:35:31,701 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:35:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6888/7340 [249:13<16:21, 27.6 steps/min]2025-08-11 19:35:32,380 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:35:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6888/7340 [249:16<16:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:36,070 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:35:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6888/7340 [249:18<16:21, 27.6 steps/min]\u001b[92m19:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:38,856 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:35:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:35:39,552 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:35:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:35:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:35:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:35:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6888/7340 [249:21<16:21, 27.6 steps/min]2025-08-11 19:35:40,920 - agent.ComputerAgent - INFO - Computer: click({'x': 320, 'y': 56})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 320, 'y': 56})\n",
+ "2025-08-11 19:35:41,609 - agent.ComputerAgent - INFO - Computer: click({'x': 524, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 524, 'y': 164})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 94%|█████████████████████████████████████---| 6888/7340 [249:23<16:21, 27.6 steps/min]\u001b[92m19:35:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:35:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:42,902 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 343, 'y': 183})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:35:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6890/7340 [249:25<16:17, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:44,254 - agent.ComputerAgent - INFO - Computer: click({'x': 971, 'y': 760})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 971, 'y': 760})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:45,557 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+ALT+T'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'CTRL+ALT+T'})\n",
+ "\u001b[92m19:35:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6891/7340 [249:28<16:15, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:46,930 - agent.ComputerAgent - INFO - Computer: click({'x': 212, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 212, 'y': 178})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:35:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:35:48,241 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:35:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:35:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:49,549 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6892/7340 [249:32<16:13, 27.6 steps/min]\u001b[92m19:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:50,903 - agent.ComputerAgent - INFO - Computer: click({'x': 542, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 542, 'y': 232})\n",
+ "\u001b[92m19:35:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:51,582 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:35:51,582 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 311})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 311})\n",
+ "\u001b[92m19:35:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6894/7340 [249:33<16:08, 27.6 steps/min]2025-08-11 19:35:52,242 - agent.ComputerAgent - INFO - Computer: click({'x': 728, 'y': 277})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 728, 'y': 277})\n",
+ "2025-08-11 19:35:52,901 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:35:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6896/7340 [249:34<16:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:35:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:35:54,261 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:35:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6897/7340 [249:36<16:01, 27.6 steps/min]\u001b[92m19:35:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:35:54,925 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 256, 'y': 152})\n",
+ "2025-08-11 19:35:55,632 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:35:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6898/7340 [249:37<15:59, 27.6 steps/min]2025-08-11 19:35:56,297 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:35:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6898/7340 [249:38<15:59, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:57,855 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:35:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6898/7340 [249:39<15:59, 27.6 steps/min]2025-08-11 19:35:58,541 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:35:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:35:59,906 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:00,560 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:36:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6898/7340 [249:42<16:00, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:02,971 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6899/7340 [249:44<15:57, 27.6 steps/min]2025-08-11 19:36:03,640 - agent.ComputerAgent - INFO - Computer: click({'x': 843, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 843, 'y': 185})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:04,982 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:36:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6900/7340 [249:46<15:55, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:05,672 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:36:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:06,342 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:36:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6901/7340 [249:48<15:53, 27.6 steps/min]2025-08-11 19:36:07,045 - agent.ComputerAgent - INFO - Computer: click({'x': 525, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 525, 'y': 202})\n",
+ "2025-08-11 19:36:07,732 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:36:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6901/7340 [249:50<15:53, 27.6 steps/min]\u001b[92m19:36:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:36:09,102 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:36:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:09,771 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:36:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6902/7340 [249:51<15:51, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:10,445 - agent.ComputerAgent - INFO - Computer: click({'x': 125, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 125, 'y': 185})\n",
+ " 94%|█████████████████████████████████████---| 6902/7340 [249:52<15:51, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6903/7340 [249:53<15:49, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:13,322 - agent.ComputerAgent - INFO - Computer: click({'x': 697, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 697, 'y': 203})\n",
+ " 94%|█████████████████████████████████████---| 6903/7340 [249:55<15:49, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:13,990 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:36:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:15,290 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(Sheet1.F2:F11)'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=SUM(Sheet1.F2:F11)'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:16,624 - agent.ComputerAgent - INFO - Computer: type({'text': '=A2/B2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=A2/B2'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7a2c2835-d21e-4e04-babb-e8305a4f1f9d/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6904/7340 [249:59<15:47, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:19,043 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'SUPER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'SUPER'})\n",
+ "\u001b[92m19:36:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:21,052 - agent.ComputerAgent - INFO - Computer: type({'text': 'Vienna'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Vienna'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/reset \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6906/7340 [250:02<15:42, 27.6 steps/min]2025-08-11 19:36:21,664 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 991, 'y': 400})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 991, 'y': 400})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:22,332 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:36:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:23,741 - agent.ComputerAgent - INFO - Agent: The image has been set to Palette-Based (Indexed color). Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: The image has been set to Palette-Based (Indexed color). Task completed.\n",
+ "2025-08-11 19:36:24,394 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 216\n",
+ " - prompt_tokens: 14169\n",
+ " - total_tokens: 14385\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 3200\n",
+ " - response_cost: $0.0163\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 216\n",
+ " - prompt_tokens: 14169\n",
+ " - total_tokens: 14385\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 3200\n",
+ " - response_cost: $0.0163\n",
+ "2025-08-11 19:36:25,063 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6909/7340 [250:06<15:36, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:36:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:27,045 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 314})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 314})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 94%|█████████████████████████████████████---| 6910/7340 [250:09<15:34, 27.6 steps/min]\u001b[92m19:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:36:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:28,884 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:36:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "\u001b[92m19:36:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ " 94%|█████████████████████████████████████---| 6911/7340 [250:10<15:31, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:29,567 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 153, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:30,222 - agent.ComputerAgent - INFO - Computer: click({'x': 702, 'y': 315})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 702, 'y': 315})\n",
+ " 94%|█████████████████████████████████████---| 6911/7340 [250:11<15:31, 27.6 steps/min]\u001b[92m19:36:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:31,315 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 204, 'y': 94}, {'x': 284, 'y': 396}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 204, 'y': 94}, {'x': 284, 'y': 396}]})\n",
+ " 94%|█████████████████████████████████████---| 6913/7340 [250:13<15:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:33,253 - agent.ComputerAgent - INFO - Computer: click({'x': 346, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 346, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6914/7340 [250:14<15:25, 27.6 steps/min]2025-08-11 19:36:34,377 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:36:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6915/7340 [250:16<15:22, 27.6 steps/min]2025-08-11 19:36:35,085 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:36:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:35,867 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:36:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6915/7340 [250:17<15:22, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:36,552 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:36:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:37,241 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:36:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6915/7340 [250:18<15:23, 27.6 steps/min]2025-08-11 19:36:37,883 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:36:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:36:38,515 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:36:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6915/7340 [250:20<15:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/25f45afe-ee57-4629-9991-c515438accab/reset \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:39,862 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:36:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:40,551 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:36:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:36:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6915/7340 [250:22<15:23, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:42,304 - agent.ComputerAgent - INFO - Computer: click({'x': 407, 'y': 374})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 407, 'y': 374})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:42,972 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6915/7340 [250:24<15:23, 27.6 steps/min]2025-08-11 19:36:43,653 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:44,333 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:36:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:45,028 - agent.ComputerAgent - INFO - Computer: click({'x': 499, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 499, 'y': 524})\n",
+ " 94%|█████████████████████████████████████---| 6916/7340 [250:26<15:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6917/7340 [250:28<15:19, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:36:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:47,037 - agent.ComputerAgent - INFO - Computer: click({'x': 620, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 620, 'y': 105})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:48,403 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ " 94%|█████████████████████████████████████---| 6917/7340 [250:30<15:19, 27.6 steps/min]\u001b[92m19:36:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:36:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:49,076 - agent.ComputerAgent - INFO - Computer: click({'x': 463, 'y': 339})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 463, 'y': 339})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:36:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:51,120 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 94%|█████████████████████████████████████---| 6918/7340 [250:32<15:17, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:51,794 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:36:51,795 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 205})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 471, 'y': 205})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:54,503 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:36:54,505 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 94%|█████████████████████████████████████---| 6920/7340 [250:36<15:12, 27.6 steps/min]2025-08-11 19:36:55,180 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 602, 'scroll_x': 0, 'x': 993, 'y': 378})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 602, 'scroll_x': 0, 'x': 993, 'y': 378})\n",
+ "\u001b[92m19:36:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:36:55,838 - agent.ComputerAgent - INFO - Computer: double_click({'x': 986, 'y': 578})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 986, 'y': 578})\n",
+ " 94%|█████████████████████████████████████---| 6922/7340 [250:37<15:08, 27.6 steps/min]\u001b[92m19:36:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:56,494 - agent.ComputerAgent - INFO - Computer: click({'x': 758, 'y': 437})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 758, 'y': 437})\n",
+ "2025-08-11 19:36:57,153 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 94%|█████████████████████████████████████---| 6924/7340 [250:38<15:03, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:36:57,830 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:36:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:36:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 94%|█████████████████████████████████████---| 6925/7340 [250:40<15:01, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:36:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:36:59,672 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 343, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6925/7340 [250:42<15:01, 27.6 steps/min]\u001b[92m19:37:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/963f0b0a-47d1-479c-9077-6c59023108fe/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:37:01,022 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:37:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:37:01,711 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:37:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:37:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6926/7340 [250:43<14:59, 27.6 steps/min]2025-08-11 19:37:02,368 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 306, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 306, 'y': 157})\n",
+ "2025-08-11 19:37:03,430 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:37:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 94%|█████████████████████████████████████---| 6926/7340 [250:45<14:59, 27.6 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:37:04,095 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:37:06,766 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+SHIFT+C'})\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:37:09,838 - agent.ComputerAgent - INFO - Computer: type({'text': 'Vienna'})\n",
+ "2025-08-11 19:37:11,885 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 19:37:14,559 - agent.ComputerAgent - INFO - Computer: type({'text': '=SUM(Sheet1.G2:G11)'})\n",
+ "2025-08-11 19:37:15,223 - agent.ComputerAgent - INFO - Computer: click({'x': 543, 'y': 50})\n",
+ " 94%|█████████████████████████████████████---| 6929/7340 [250:57<14:53, 27.6 steps/min]\n",
+ "2025-08-11 19:37:17,958 - agent.ComputerAgent - INFO - Computer: click({'x': 166, 'y': 427})\n",
+ "2025-08-11 19:37:18,638 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 603, 'scroll_x': 0, 'x': 998, 'y': 454})\n",
+ "2025-08-11 19:37:19,280 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 136, 'y': 187}, {'x': 129, 'y': 272}]})\n",
+ "2025-08-11 19:37:21,997 - agent.ComputerAgent - INFO - Computer: double_click({'x': 86, 'y': 314})\n",
+ "2025-08-11 19:37:23,393 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "2025-08-11 19:37:24,076 - agent.ComputerAgent - INFO - Computer: click({'x': 698, 'y': 238})\n",
+ " 95%|█████████████████████████████████████---| 6937/7340 [251:07<14:35, 27.6 steps/min]\n",
+ "2025-08-11 19:37:29,911 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/f209659e-2ed9-40fa-ae61-5359fb9ef290/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:37:31,259 - agent.ComputerAgent - INFO - Computer: click({'x': 349, 'y': 77})\n",
+ "2025-08-11 19:37:33,927 - agent.ComputerAgent - INFO - Computer: type({'text': 'terminal'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 502 Bad Gateway\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6942/7340 [251:17<14:24, 27.6 steps/min]\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00,  1.45s/it]\n",
+ "2025-08-11 19:37:46,177 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 184})\n",
+ "2025-08-11 19:37:48,145 - agent.ComputerAgent - INFO - Computer: type({'text': 'path=$(find / -type f -name \"secret.docx\" 2>/dev/null | head -n1); printf \"%s\\\\n\" \"$path\"; echo -n \"$path\" | xclip -selection clipboard 2>/dev/null || echo -n \"$path\" | wl-copy 2>/dev/null || true'})\n",
+ "2025-08-11 19:37:50,401 - agent.ComputerAgent - INFO - Agent: The video has been converted to an MP3 and saved on the Desktop as “Baby Justin Bieber.mp3.” You can now play it anytime.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 19:37:51,060 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 488\n",
+ " - prompt_tokens: 12887\n",
+ " - total_tokens: 13375\n",
+ " - completion_tokens_details:\n",
+ "   - accepted_prediction_tokens: 0\n",
+ "   - audio_tokens: 0\n",
+ "   - reasoning_tokens: 448\n",
+ "   - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ "   - audio_tokens: 0\n",
+ "   - cached_tokens: 0\n",
+ " - response_cost: $0.0210\n",
+ "2025-08-11 19:37:53,761 - agent.ComputerAgent - INFO - Computer: click({'x': 515, 'y': 232})\n",
+ "2025-08-11 19:37:54,423 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 989, 'y': 366})\n",
+ "2025-08-11 19:37:55,120 - agent.ComputerAgent - INFO - Computer: double_click({'x': 94, 'y': 453})\n",
+ "2025-08-11 19:37:55,799 - agent.ComputerAgent - INFO - Computer: click({'x': 153, 'y': 52})\n",
+ "2025-08-11 19:37:56,468 - agent.ComputerAgent - INFO - Computer: double_click({'x': 354, 'y': 136})\n",
+ "2025-08-11 19:37:57,837 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 205})\n",
+ "2025-08-11 19:37:58,540 - agent.ComputerAgent - INFO - Computer: click({'x': 945, 'y': 500})\n",
+ "2025-08-11 19:37:59,244 - agent.ComputerAgent - INFO - Computer: double_click({'x': 989, 'y': 713})\n",
+ "2025-08-11 19:37:59,918 - agent.ComputerAgent - INFO - Computer: click({'x': 913, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6954/7340 [251:43<13:58, 27.6 steps/min]2025-08-11 19:38:02,285 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:38:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6954/7340 [251:44<13:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6954/7340 [251:45<13:58, 27.6 steps/min]2025-08-11 19:38:04,439 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:38:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6954/7340 [251:46<13:58, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:05,613 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:38:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:07,021 - agent.ComputerAgent - INFO - Agent: I configured your system to lock automatically when you’re away.\n",
+ "\n",
+ "What I set\n",
+ "- Enabled Automatic Screen Lock.\n",
+ "- Set Blank Screen Delay to 5 minutes, so after 5 minutes of inactivity the screen turns off and locks immediately.\n",
+ "- Kept “Lock screen on Suspend” enabled, so any suspend will also require a password on wake.\n",
+ "\n",
+ "How to adjust later\n",
+ "- Settings > Privacy > Screen:\n",
+ " - Change “Blank Screen Delay” to your preferred timeout (e.g., 1–2 minutes for tighter security).\n",
+ " - “Automatic Screen Lock Delay” is set to lock as soon as the screen turns off.\n",
+ "\n",
+ "Tip\n",
+ "- Press Super+L anytime to lock instantly.\n",
+ "\n",
+ "Task completed.\n",
+ "2025-08-11 19:38:07,641 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 790\n",
+ " - prompt_tokens: 10768\n",
+ " - total_tokens: 11558\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 640\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0214\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6955/7340 [251:49<13:56, 27.6 steps/min]2025-08-11 19:38:08,315 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:38:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:38:09,672 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:38:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 95%|█████████████████████████████████████---| 6955/7340 [251:52<13:56, 27.6 steps/min]\u001b[92m19:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:38:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:38:11,032 - agent.ComputerAgent - INFO - Computer: click({'x': 375, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 375, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:38:11,731 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 202})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 202})\n",
+ " 95%|█████████████████████████████████████---| 6958/7340 [251:53<13:49, 27.6 steps/min]2025-08-11 19:38:12,415 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:38:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:13,115 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:38:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6960/7340 [251:54<13:45, 27.6 steps/min]2025-08-11 19:38:13,805 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:38:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/af58ffed-65a3-4c4a-a9fe-5c940230627d/close \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:14,486 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:38:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:38:15,845 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:38:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6960/7340 [251:57<13:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:17,724 - agent.ComputerAgent - INFO - Computer: type({'text': 'Vienna'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Vienna'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:19,084 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6960/7340 [252:00<13:45, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:20,825 - agent.ComputerAgent - INFO - Computer: click({'x': 342, 'y': 184})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 342, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6969/7340 [252:02<13:25, 27.7 steps/min]2025-08-11 19:38:21,487 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:38:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:38:22,165 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:38:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|█████████████████████████████████████---| 6970/7340 [252:05<13:22, 27.6 steps/min]\u001b[92m19:38:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7633715b-dde0-4c56-a4b6-22ccee78f5f5/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6970/7340 [252:08<13:23, 27.6 steps/min]\u001b[92m19:38:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6970/7340 [252:09<13:23, 27.6 steps/min]2025-08-11 19:38:28,306 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:38:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6970/7340 [252:10<13:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:04<00:04, 2.22s/it]2025-08-11 19:38:29,766 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:38:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6970/7340 [252:11<13:23, 27.6 steps/min]2025-08-11 19:38:30,434 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:38:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:06<00:00,  1.66s/it]\n",
+ "2025-08-11 19:38:31,614 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:38:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]\u001b[92m19:38:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:14<13:21, 27.6 steps/min]\u001b[92m19:38:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:15<13:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:34,841 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:38:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:17<13:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:10<00:00,  2.61s/it]\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:19<13:21, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:38:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:21<13:21, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:07<00:07, 3.52s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:09<00:02, 2.64s/it]2025-08-11 19:38:41,780 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+shift+p'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:09<00:00, 2.45s/it]\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:24<13:21, 27.6 steps/min]\u001b[92m19:38:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:38:43,059 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:38:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:25<13:21, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:26<13:21, 27.6 steps/min]\u001b[92m19:38:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:38:45,359 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 205})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 471, 'y': 205})\n",
+ " 95%|█████████████████████████████████████---| 6971/7340 [252:27<13:21, 27.6 steps/min]\u001b[92m19:38:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:38:46,582 - agent.ComputerAgent - INFO - Computer: click({'x': 103, 'y': 398})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 103, 'y': 398})\n",
+ " 95%|█████████████████████████████████████---| 6972/7340 [252:28<13:19, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:47,931 - agent.ComputerAgent - INFO - Computer: type({'text': \"sudo -n true 2>/dev/null || echo 'Requesting sudo password if needed'\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"sudo -n true 2>/dev/null || echo 'Requesting sudo password if needed'\"})\n",
+ " 95%|██████████████████████████████████████--| 6973/7340 [252:29<13:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:38:49,132 - agent.ComputerAgent - INFO - Computer: click({'x': 178, 'y': 305})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 178, 'y': 305})\n",
+ " 95%|██████████████████████████████████████--| 6974/7340 [252:30<13:15, 27.6 steps/min]\u001b[92m19:38:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:38:49,800 - agent.ComputerAgent - INFO - Computer: click({'x': 932, 'y': 296})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 932, 'y': 296})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6975/7340 [252:32<13:12, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6976/7340 [252:33<13:10, 27.6 steps/min]\u001b[92m19:38:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:38:52,154 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 313})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 88, 'y': 313})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:38:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/fa081188-4859-4858-9d33-0f9675111182/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6976/7340 [252:35<13:10, 27.6 steps/min]\u001b[92m19:38:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:38:54,176 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:38:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:38:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:38:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6977/7340 [252:36<13:08, 27.6 steps/min]2025-08-11 19:38:55,531 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:38:55,532 - agent.ComputerAgent - INFO - Computer: double_click({'x': 379, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 379, 'y': 105})\n",
+ "2025-08-11 19:38:56,196 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:38:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 6977/7340 [252:37<13:08, 27.6 steps/min]2025-08-11 19:38:56,856 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:38:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:38:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:38:57,552 - agent.ComputerAgent - INFO - Computer: click({'x': 351, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 351, 'y': 75})\n",
+ " 95%|██████████████████████████████████████--| 6978/7340 [252:39<13:06, 27.6 steps/min]2025-08-11 19:38:58,981 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6979/7340 [252:40<13:04, 27.6 steps/min]2025-08-11 19:38:59,669 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:38:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6979/7340 [252:42<13:04, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:02,357 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:39:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:03,722 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6979/7340 [252:45<13:04, 27.6 steps/min]2025-08-11 19:39:04,403 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 993, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 993, 'y': 732})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:05,068 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:39:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:39:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6980/7340 [252:46<13:02, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:06,128 - agent.ComputerAgent - INFO - Computer: click({'x': 316, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 316, 'y': 183})\n",
+ "2025-08-11 19:39:06,806 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:39:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:08,174 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6981/7340 [252:50<13:00, 27.6 steps/min]\u001b[92m19:39:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:10,148 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:39:10,149 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'meta'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'meta'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:11,496 - agent.ComputerAgent - INFO - Computer: click({'x': 569, 'y': 372})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 569, 'y': 372})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:12,817 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 95%|██████████████████████████████████████--| 6983/7340 [252:54<12:55, 27.6 steps/min]2025-08-11 19:39:13,521 - agent.ComputerAgent - INFO - Computer: click({'x': 81, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 81, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:39:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:14,827 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 210, 'y': 177}, {'x': 210, 'y': 457}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 210, 'y': 177}, {'x': 210, 'y': 457}]})\n",
+ "2025-08-11 19:39:15,498 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:39:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:39:16,177 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ " 95%|██████████████████████████████████████--| 6985/7340 [252:57<12:51, 27.6 steps/min]\u001b[92m19:39:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:39:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6987/7340 [252:58<12:46, 27.6 steps/min]\u001b[92m19:39:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:17,905 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 321})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 321})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6988/7340 [253:01<12:44, 27.6 steps/min]\u001b[92m19:39:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:20,770 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 430})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6988/7340 [253:02<12:44, 27.6 steps/min]2025-08-11 19:39:21,798 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:39:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6989/7340 [253:03<12:42, 27.6 steps/min]2025-08-11 19:39:22,498 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:39:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:23,147 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:39:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:39:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6989/7340 [253:04<12:42, 27.6 steps/min]2025-08-11 19:39:23,865 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 209, 'y': 146}, {'x': 281, 'y': 396}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 209, 'y': 146}, {'x': 281, 'y': 396}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:39:24,516 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:39:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6989/7340 [253:06<12:42, 27.6 steps/min]2025-08-11 19:39:25,167 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:27,161 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "2025-08-11 19:39:27,846 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:39:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:39:28,870 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:39:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/1a178f89-87e5-46d9-a114-22d5fcc5c630/close \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6990/7340 [253:10<12:40, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:39:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:30,222 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 19:39:30,916 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 314})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 314})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6991/7340 [253:13<12:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:33,548 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:39:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:39:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6992/7340 [253:15<12:36, 27.6 steps/min]2025-08-11 19:39:34,246 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:39:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:39:34,914 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 305})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 305})\n",
+ "2025-08-11 19:39:35,577 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:39:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 6992/7340 [253:17<12:36, 27.6 steps/min]2025-08-11 19:39:36,257 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:39:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]<12:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:20<12:34, 27.6 steps/min]\u001b[92m19:39:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:21<12:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.61s/it]2025-08-11 19:39:40,827 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:39:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:22<12:34, 27.6 steps/min]2025-08-11 19:39:41,507 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:39:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.62s/it]2025-08-11 19:39:42,387 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:39:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:24<12:34, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.19s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.35s/it]\n",
+ "\u001b[92m19:39:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:25<12:34, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:26<12:34, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:45,816 - agent.ComputerAgent - INFO - Computer: click({'x': 570, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 570, 'y': 105})\n",
+ " 95%|██████████████████████████████████████--| 6993/7340 [253:27<12:34, 27.6 steps/min]\u001b[92m19:39:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:46,459 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 134})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 248, 'y': 134})\n",
+ "\u001b[92m19:39:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:47,134 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 628, 'scroll_x': 0, 'x': 996, 'y': 731})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 628, 'scroll_x': 0, 'x': 996, 'y': 731})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6994/7340 [253:29<12:32, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:39:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:48,461 - agent.ComputerAgent - INFO - Computer: click({'x': 187, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 187, 'y': 52})\n",
+ "\u001b[92m19:39:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:49,144 - agent.ComputerAgent - INFO - Computer: double_click({'x': 374, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 374, 'y': 105})\n",
+ "\u001b[92m19:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 6996/7340 [253:30<12:27, 27.6 steps/min]2025-08-11 19:39:49,826 - agent.ComputerAgent - INFO - Computer: click({'x': 693, 'y': 203})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 693, 'y': 203})\n",
+ "\u001b[92m19:39:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:50,471 - agent.ComputerAgent - INFO - Computer: click({'x': 308, 'y': 186})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 308, 'y': 186})\n",
+ " 95%|██████████████████████████████████████--| 6998/7340 [253:32<12:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 7000/7340 [253:33<12:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:39:53,008 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 7000/7340 [253:35<12:19, 27.6 steps/min]\u001b[92m19:39:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:39:54,334 - agent.ComputerAgent - INFO - Computer: click({'x': 998, 'y': 731})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:55,398 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:39:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:39:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:39:56,726 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo id'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:39:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 7000/7340 [253:39<12:19, 27.6 steps/min]2025-08-11 19:39:58,020 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 184})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:39:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 7002/7340 [253:40<12:14, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:39:59,337 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "\u001b[92m19:39:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:40:00,358 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:40:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:40:01,048 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:40:01,728 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:40:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 7003/7340 [253:43<12:12, 27.6 steps/min]2025-08-11 19:40:02,405 - agent.ComputerAgent - INFO - Computer: click({'x': 294, 'y': 397})\n",
+ "2025-08-11 19:40:03,087 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 314})\n",
+ "2025-08-11 19:40:03,749 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:40:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 7003/7340 [253:45<12:12, 27.6 steps/min]\n",
+ "2025-08-11 19:40:04,774 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:40:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 7005/7340 [253:46<12:08, 27.6 steps/min]2025-08-11 19:40:05,430 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:40:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 7005/7340 [253:49<12:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:09,139 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:40:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 7005/7340 [253:50<12:08, 27.6 steps/min]2025-08-11 19:40:09,809 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:40:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:10,514 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:40:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 95%|██████████████████████████████████████--| 7005/7340 [253:52<12:08, 27.6 steps/min]2025-08-11 19:40:11,199 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:40:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:12,906 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ " 95%|██████████████████████████████████████--| 7005/7340 [253:54<12:08, 27.6 steps/min]2025-08-11 19:40:14,373 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:40:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:16,230 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 95%|██████████████████████████████████████--| 7006/7340 [253:57<12:06, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:18,266 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://flags/#chrome-refresh-2023'})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:18,934 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ " 95%|██████████████████████████████████████--| 7006/7340 [254:00<12:06, 27.6 steps/min]\u001b[92m19:40:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:40:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:19,632 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 1002, 'y': 732})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 95%|██████████████████████████████████████--| 7007/7340 [254:02<12:04, 27.6 steps/min]\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:40:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:21,501 - agent.ComputerAgent - INFO - Computer: click({'x': 758, 'y': 437})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 95%|██████████████████████████████████████--| 7008/7340 [254:03<12:02, 27.6 steps/min]\u001b[92m19:40:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:24,119 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:25,413 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ " 95%|██████████████████████████████████████--| 7009/7340 [254:07<12:00, 27.6 steps/min]2025-08-11 19:40:26,092 - agent.ComputerAgent - INFO - Computer: click({'x': 88, 'y': 314})\n",
+ "\u001b[92m19:40:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:26,786 - agent.ComputerAgent - INFO - Computer: click({'x': 357, 'y': 277})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7011/7340 [254:08<11:55, 27.6 steps/min]2025-08-11 19:40:27,434 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:40:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:40:28,123 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:40:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7013/7340 [254:10<11:51, 27.6 steps/min]\u001b[92m19:40:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:30,124 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "2025-08-11 19:40:30,753 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:40:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:32,096 - agent.ComputerAgent - INFO - Computer: type({'text': '=PROPER(TRIM(A2))'})\n",
+ " 96%|██████████████████████████████████████--| 7013/7340 [254:13<11:51, 27.6 steps/min]2025-08-11 19:40:32,781 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 81})\n",
+ "2025-08-11 19:40:33,432 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:40:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:34,772 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+V'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7014/7340 [254:17<11:49, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:36,061 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:40:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:36,744 - agent.ComputerAgent - INFO - Computer: click({'x': 489, 'y': 62})\n",
+ " 96%|██████████████████████████████████████--| 7015/7340 [254:18<11:46, 27.6 steps/min]2025-08-11 19:40:37,449 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:40:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7016/7340 [254:20<11:44, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:40:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:39,350 - agent.ComputerAgent - INFO - Computer: click({'x': 644, 'y': 625})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7016/7340 [254:21<11:44, 27.6 steps/min]\u001b[92m19:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:41,284 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:40:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7017/7340 [254:23<11:42, 27.6 steps/min]\u001b[92m19:40:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:41,958 - agent.ComputerAgent - INFO - Computer: click({'x': 20, 'y': 93})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:42,594 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:40:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7017/7340 [254:24<11:42, 27.6 steps/min]2025-08-11 19:40:43,655 - agent.ComputerAgent - INFO - Computer: click({'x': 209, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7018/7340 [254:25<11:40, 27.6 steps/min]2025-08-11 19:40:44,334 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:40:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:40:44,994 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7019/7340 [254:26<11:38, 27.6 steps/min]2025-08-11 19:40:45,667 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:40:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:46,981 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7019/7340 [254:28<11:38, 27.6 steps/min]2025-08-11 19:40:48,357 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:40:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:40:49,760 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome://flags/#chrome-refresh-2023'})\n",
+ "2025-08-11 19:40:50,449 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ " 96%|██████████████████████████████████████--| 7019/7340 [254:32<11:38, 27.6 steps/min]\u001b[92m19:40:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:51,836 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:40:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:40:52,528 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ " 96%|██████████████████████████████████████--| 7020/7340 [254:34<11:36, 27.6 steps/min]\u001b[92m19:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:40:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:54,560 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 991, 'y': 433})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7020/7340 [254:36<11:36, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:55,249 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:55,955 - agent.ComputerAgent - INFO - Computer: click({'x': 401, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 401, 'y': 75})\n",
+ " 96%|██████████████████████████████████████--| 7021/7340 [254:37<11:34, 27.6 steps/min]\u001b[92m19:40:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:40:56,624 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:40:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:58,622 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7022/7340 [254:41<11:32, 27.6 steps/min]\u001b[92m19:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:40:59,950 - agent.ComputerAgent - INFO - Computer: click({'x': 89, 'y': 314, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 89, 'y': 314, 'button': 'left'})\n",
+ "\u001b[92m19:40:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:00,599 - agent.ComputerAgent - INFO - Computer: click({'x': 400, 'y': 398})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 400, 'y': 398})\n",
+ " 96%|██████████████████████████████████████--| 7024/7340 [254:42<11:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:01,258 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:41:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/c915dbd9-32bc-40a7-9c07-d437c737419f/close \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7026/7340 [254:43<11:23, 27.6 steps/min]INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:03,814 - agent.ComputerAgent - INFO - Computer: type({'text': 'Yann LeCun Google Scholar'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Yann LeCun Google Scholar'})\n",
+ " 96%|██████████████████████████████████████--| 7026/7340 [254:45<11:23, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:05,700 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7027/7340 [254:47<11:20, 27.6 steps/min]2025-08-11 19:41:06,388 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:41:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:07,039 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:48<11:18, 27.6 steps/min]2025-08-11 19:41:07,728 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:41:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:08,429 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:41:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:50<11:18, 27.6 steps/min]2025-08-11 19:41:09,118 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:41:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:09,818 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:41:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:51<11:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:52<11:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:53<11:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:54<11:19, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:14,390 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:41:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:56<11:19, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:41:15,049 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:41:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:57<11:19, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.69s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.62s/it]2025-08-11 19:41:17,969 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 96%|██████████████████████████████████████--| 7028/7340 [254:59<11:19, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.60s/it]2025-08-11 19:41:19,480 - agent.ComputerAgent - INFO - Computer: type({'text': 'set -e\\nsudo mkdir -p /home/test1/uploads\\nsudo chown root:root /home/test1\\nsudo chmod 755 /home/test1\\n\\n# Create the user for SFTP-only access chrooted to /home/test1\\nif id -u charles >/dev/null 2>&1; then\\n echo \\'User charles already exists\\'\\nelse\\n sudo useradd -M -s /usr/sbin/nologin -d / charles\\nfi\\n\\necho \\'charles:Ex@mpleP@55w0rd!\\' | sudo chpasswd\\n\\n# Give the user write access inside the jail via a subdirectory\\nsudo mkdir -p /home/test1/uploads\\nsudo chown charles:charles /home/test1/uploads\\n\\n# Configure OpenSSH to chroot the user and force SFTP only\\nsudo bash -lc \\'cat >>/etc/ssh/sshd_config <<\"EOF\"\\n\\nMatch User charles\\n ChrootDirectory /home/test1\\n ForceCommand internal-sftp\\n AllowTCPForwarding no\\n X11Forwarding no\\nEOF\\'\\n\\n# Restart SSH to apply changes\\nsudo systemctl restart ssh || sudo systemctl restart sshd\\n\\necho \\'Setup complete: user charles restricted to /home/test1 via SFTP.\\''})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'set -e\\nsudo mkdir -p /home/test1/uploads\\nsudo chown root:root /home/test1\\nsudo chmod 755 /home/test1\\n\\n# Create the user for SFTP-only access chrooted to /home/test1\\nif id -u charles >/dev/null 2>&1; then\\n echo \\'User charles already exists\\'\\nelse\\n sudo useradd -M -s /usr/sbin/nologin -d / charles\\nfi\\n\\necho \\'charles:Ex@mpleP@55w0rd!\\' | sudo chpasswd\\n\\n# Give the user write access inside the jail via a subdirectory\\nsudo mkdir -p /home/test1/uploads\\nsudo chown charles:charles /home/test1/uploads\\n\\n# Configure OpenSSH to chroot the user and force SFTP only\\nsudo bash -lc \\'cat >>/etc/ssh/sshd_config <<\"EOF\"\\n\\nMatch User charles\\n ChrootDirectory /home/test1\\n ForceCommand internal-sftp\\n AllowTCPForwarding no\\n X11Forwarding no\\nEOF\\'\\n\\n# Restart SSH to apply changes\\nsudo systemctl restart ssh || sudo systemctl restart sshd\\n\\necho \\'Setup complete: user charles restricted to /home/test1 via SFTP.\\''})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.34s/it]\n",
+ "\u001b[92m19:41:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7029/7340 [255:01<11:17, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7030/7340 [255:03<11:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:22,776 - agent.ComputerAgent - INFO - Computer: click({'x': 187, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 187, 'y': 52})\n",
+ " 96%|██████████████████████████████████████--| 7030/7340 [255:04<11:14, 27.6 steps/min]\u001b[92m19:41:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:23,473 - agent.ComputerAgent - INFO - Computer: click({'x': 540, 'y': 471})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 540, 'y': 471})\n",
+ "\u001b[92m19:41:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:24,156 - agent.ComputerAgent - INFO - Computer: click({'x': 865, 'y': 201})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 865, 'y': 201})\n",
+ "\u001b[92m19:41:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7031/7340 [255:06<11:12, 27.6 steps/min]\u001b[92m19:41:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:25,511 - agent.ComputerAgent - INFO - Computer: click({'x': 91, 'y': 314, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 91, 'y': 314, 'button': 'left'})\n",
+ "2025-08-11 19:41:26,190 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 990, 'y': 709})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 640, 'scroll_x': 0, 'x': 990, 'y': 709})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:26,861 - agent.ComputerAgent - INFO - Computer: click({'x': 13, 'y': 673})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 13, 'y': 673})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:28,168 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'F11'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'F11'})\n",
+ " 96%|██████████████████████████████████████--| 7033/7340 [255:09<11:08, 27.6 steps/min]2025-08-11 19:41:28,840 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:41:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:29,543 - agent.ComputerAgent - INFO - Computer: click({'x': 461, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 461, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7037/7340 [255:11<10:59, 27.6 steps/min]\u001b[92m19:41:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:30,890 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:41:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:41:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:31,564 - agent.ComputerAgent - INFO - Computer: click({'x': 510, 'y': 283})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 510, 'y': 283})\n",
+ " 96%|██████████████████████████████████████--| 7039/7340 [255:16<10:54, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:35,309 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:41:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:36,009 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:41:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7039/7340 [255:17<10:55, 27.6 steps/min]2025-08-11 19:41:36,709 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:41:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:37,392 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:41:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:38,081 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:41:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7039/7340 [255:21<10:55, 27.6 steps/min]\u001b[92m19:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:41:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:41,479 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:41:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:41:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:42,130 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7039/7340 [255:23<10:55, 27.6 steps/min]\u001b[92m19:41:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:42,794 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 732})\n",
+ "2025-08-11 19:41:43,448 - agent.ComputerAgent - INFO - Computer: click({'x': 223, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 223, 'y': 35})\n",
+ "\u001b[92m19:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:41:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7039/7340 [255:25<10:55, 27.6 steps/min]2025-08-11 19:41:44,120 - agent.ComputerAgent - INFO - Computer: click({'x': 343, 'y': 195})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 343, 'y': 195})\n",
+ "2025-08-11 19:41:44,810 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 209, 'y': 146}, {'x': 407, 'y': 399}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 209, 'y': 146}, {'x': 407, 'y': 399}]})\n",
+ " 96%|██████████████████████████████████████--| 7041/7340 [255:26<10:50, 27.6 steps/min]2025-08-11 19:41:45,450 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:41:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:46,139 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:41:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7043/7340 [255:30<10:46, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:50,003 - agent.ComputerAgent - INFO - Computer: type({'text': 'chrome refresh 2023'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'chrome refresh 2023'})\n",
+ " 96%|██████████████████████████████████████--| 7043/7340 [255:31<10:46, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:41:51,944 - agent.ComputerAgent - INFO - Computer: type({'text': 'Yann LeCun Google Scholar'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Yann LeCun Google Scholar'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7044/7340 [255:33<10:44, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:52,582 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:41:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 96%|██████████████████████████████████████--| 7045/7340 [255:35<10:42, 27.6 steps/min]\u001b[92m19:41:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:54,575 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:41:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:41:55,282 - agent.ComputerAgent - INFO - Computer: click({'x': 90, 'y': 314, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 90, 'y': 314, 'button': 'left'})\n",
+ "\u001b[92m19:41:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/reset \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7045/7340 [255:37<10:42, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:41:55,970 - agent.ComputerAgent - INFO - Computer: double_click({'x': 12, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 12, 'y': 524})\n",
+ "\u001b[92m19:41:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:56,628 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 427})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 164, 'y': 427})\n",
+ " 96%|██████████████████████████████████████--| 7046/7340 [255:38<10:40, 27.6 steps/min]2025-08-11 19:41:57,310 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:41:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7048/7340 [255:40<10:35, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:41:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:41:59,486 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 590, 'scroll_x': 0, 'x': 991, 'y': 420})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 590, 'scroll_x': 0, 'x': 991, 'y': 420})\n",
+ " 96%|██████████████████████████████████████--| 7048/7340 [255:41<10:35, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7049/7340 [255:42<10:33, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:42:02,043 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:42:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0180c5d2-a012-4261-b093-ed34f443f269/close \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7049/7340 [255:43<10:33, 27.6 steps/min]2025-08-11 19:42:03,400 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:42:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:42:04,040 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:42:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7049/7340 [255:47<10:33, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:42:06,703 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:42:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:42:07,391 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:42:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:42:09,003 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.68s/it]2025-08-11 19:42:10,023 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:42:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.67s/it]2025-08-11 19:42:11,331 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:42:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7050/7340 [255:53<10:31, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.69s/it]\u001b[92m19:42:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7050/7340 [255:54<10:31, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]\n",
+ "2025-08-11 19:42:13,916 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ " 96%|██████████████████████████████████████--| 7050/7340 [255:55<10:31, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:15,113 - agent.ComputerAgent - INFO - Computer: click({'x': 183, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 183, 'y': 53})\n",
+ " 96%|██████████████████████████████████████--| 7051/7340 [255:56<10:29, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:15,790 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:42:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:16,456 - agent.ComputerAgent - INFO - Computer: click({'x': 660, 'y': 425})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 660, 'y': 425})\n",
+ " 96%|██████████████████████████████████████--| 7052/7340 [255:58<10:27, 27.6 steps/min]\u001b[92m19:42:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:17,130 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 131})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 131})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:18,532 - agent.ComputerAgent - INFO - Computer: click({'x': 568, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 568, 'y': 75})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 96%|██████████████████████████████████████--| 7053/7340 [256:00<10:25, 27.5 steps/min]\u001b[92m19:42:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:19,857 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:42:19,857 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 524})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 524})\n",
+ "2025-08-11 19:42:20,511 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:42:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7055/7340 [256:02<10:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:21,164 - agent.ComputerAgent - INFO - Computer: click({'x': 408, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 408, 'y': 101})\n",
+ " 96%|██████████████████████████████████████--| 7056/7340 [256:03<10:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:42:23,004 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+V'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'CTRL+V'})\n",
+ " 96%|██████████████████████████████████████--| 7057/7340 [256:04<10:16, 27.6 steps/min]2025-08-11 19:42:24,142 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7057/7340 [256:06<10:16, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:25,460 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:42:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:26,149 - agent.ComputerAgent - INFO - Computer: click({'x': 471, 'y': 206})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 471, 'y': 206})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7057/7340 [256:07<10:16, 27.6 steps/min]2025-08-11 19:42:26,839 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:42:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:42:28,195 - agent.ComputerAgent - INFO - Computer: type({'text': \"id charles || echo 'charles not found'\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"id charles || echo 'charles not found'\"})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7058/7340 [256:11<10:14, 27.6 steps/min]\u001b[92m19:42:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:30,212 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:42:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:31,495 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "2025-08-11 19:42:32,181 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:42:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7059/7340 [256:13<10:11, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:32,866 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 732})\n",
+ "2025-08-11 19:42:33,540 - agent.ComputerAgent - INFO - Computer: click({'x': 373, 'y': 77})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 373, 'y': 77})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/reset \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7060/7340 [256:15<10:09, 27.6 steps/min]2025-08-11 19:42:34,221 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7062/7340 [256:16<10:05, 27.6 steps/min]2025-08-11 19:42:34,880 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:42:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7062/7340 [256:17<10:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:42:36,067 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:42:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:42:44,085 - agent.ComputerAgent - INFO - Computer: type({'text': 'sudo apt-get update && sudo apt-get install -y kid3'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'sudo apt-get update && sudo apt-get install -y kid3'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7062/7340 [256:25<10:05, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:44,776 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:42:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:42:45,431 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:42:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7063/7340 [256:28<10:03, 27.5 steps/min]2025-08-11 19:42:47,553 - agent.ComputerAgent - INFO - Computer: click({'x': 186, 'y': 237})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 186, 'y': 237})\n",
+ "2025-08-11 19:42:48,207 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:42:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:42:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:49,516 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:42:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7063/7340 [256:31<10:03, 27.5 steps/min]\u001b[92m19:42:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:42:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:50,862 - agent.ComputerAgent - INFO - Computer: click({'x': 211, 'y': 446})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 211, 'y': 446})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:42:53,559 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:42:53,560 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7064/7340 [256:36<10:01, 27.5 steps/min]\u001b[92m19:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:54,899 - agent.ComputerAgent - INFO - Computer: click({'x': 232, 'y': 211})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 232, 'y': 211})\n",
+ "\u001b[92m19:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:55,590 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 227})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 671, 'y': 227})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/reset \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7066/7340 [256:37<9:57, 27.5 steps/min]\u001b[92m19:42:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:56,244 - agent.ComputerAgent - INFO - Computer: click({'x': 847, 'y': 186})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 847, 'y': 186})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:56,898 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:42:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7068/7340 [256:38<9:52, 27.5 steps/min]2025-08-11 19:42:57,585 - agent.ComputerAgent - INFO - Computer: click({'x': 643, 'y': 424})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 643, 'y': 424})\n",
+ "2025-08-11 19:42:58,241 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 194, 'y': 146}, {'x': 283, 'y': 146}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 194, 'y': 146}, {'x': 283, 'y': 146}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:42:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7069/7340 [256:40<9:50, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:42:59,548 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:42:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:42:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:00,234 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 314, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 314, 'button': 'left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7071/7340 [256:41<9:45, 27.5 steps/min]2025-08-11 19:43:00,858 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:43:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:01,536 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:43:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7072/7340 [256:43<9:43, 27.5 steps/min]2025-08-11 19:43:02,206 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:43:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:02,869 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:43:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7072/7340 [256:45<9:43, 27.5 steps/min]\u001b[92m19:43:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:04,240 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:43:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:05,290 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:43:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:43:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7072/7340 [256:47<9:43, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:05,977 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 164, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:06,983 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:43:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:08,330 - agent.ComputerAgent - INFO - Computer: type({'text': '>Background Cover'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '>Background Cover'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7072/7340 [256:50<9:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:08,956 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:43:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:10,282 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:12,251 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ENTER'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:43:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7074/7340 [256:54<9:39, 27.5 steps/min]2025-08-11 19:43:13,540 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:43:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:43:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:14,917 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "2025-08-11 19:43:15,565 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:43:15,566 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 166})\n",
+ "\u001b[92m19:43:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7076/7340 [256:57<9:35, 27.5 steps/min]2025-08-11 19:43:16,239 - agent.ComputerAgent - INFO - Computer: click({'x': 985, 'y': 759})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 985, 'y': 759})\n",
+ "2025-08-11 19:43:16,881 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:43:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7078/7340 [256:58<9:30, 27.5 steps/min]2025-08-11 19:43:17,557 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:43:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:18,888 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:43:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 96%|██████████████████████████████████████--| 7079/7340 [257:00<9:28, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:20,265 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 96%|██████████████████████████████████████--| 7079/7340 [257:02<9:28, 27.5 steps/min]\u001b[92m19:43:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:21,621 - agent.ComputerAgent - INFO - Computer: scroll({'scroll_y': 588, 'scroll_x': 0, 'x': 991, 'y': 433})\n",
+ "INFO:agent.ComputerAgent:Computer: scroll({'scroll_y': 588, 'scroll_x': 0, 'x': 991, 'y': 433})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:23,319 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:43:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:43:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ " 96%|██████████████████████████████████████--| 7079/7340 [257:05<9:28, 27.5 steps/min]\u001b[92m19:43:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:24,668 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:43:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:25,366 - agent.ComputerAgent - INFO - Computer: click({'x': 341, 'y': 75})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 341, 'y': 75})\n",
+ "\u001b[92m19:43:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:43:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:26,376 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:43:26,377 - agent.ComputerAgent - INFO - Computer: click({'x': 48, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 48, 'y': 52})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:27,706 - agent.ComputerAgent - INFO - Computer: click({'x': 213, 'y': 183})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 213, 'y': 183})\n",
+ " 96%|██████████████████████████████████████--| 7080/7340 [257:09<9:26, 27.5 steps/min]2025-08-11 19:43:28,338 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:43:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:29,016 - agent.ComputerAgent - INFO - Computer: click({'x': 666, 'y': 279})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 666, 'y': 279})\n",
+ " 96%|██████████████████████████████████████--| 7083/7340 [257:10<9:19, 27.5 steps/min]2025-08-11 19:43:29,687 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:43:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:30,358 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:43:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7084/7340 [257:12<9:17, 27.5 steps/min]2025-08-11 19:43:31,038 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:43:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7084/7340 [257:13<9:17, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:32,348 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:43:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:43:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:33,008 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:43:33,009 - agent.ComputerAgent - INFO - Computer: click({'x': 79, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 79, 'y': 157})\n",
+ " 97%|██████████████████████████████████████--| 7084/7340 [257:14<9:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7085/7340 [257:15<9:15, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:43:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:35,405 - agent.ComputerAgent - INFO - Computer: click({'x': 679, 'y': 563})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 679, 'y': 563})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:36,736 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/0c64a3b4-e9b0-46c1-a580-cdcf62b74e44/close \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7086/7340 [257:19<9:13, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:38,000 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:43:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7087/7340 [257:20<9:11, 27.5 steps/min]2025-08-11 19:43:39,348 - agent.ComputerAgent - INFO - Computer: click({'button': 'left', 'x': 308, 'y': 153})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'left', 'x': 308, 'y': 153})\n",
+ "2025-08-11 19:43:40,018 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:40,687 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:43:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:43:41,369 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:43:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7087/7340 [257:23<9:11, 27.5 steps/min]\u001b[92m19:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:43,783 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:43:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:43:44,435 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:43:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:43:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:05, 1.73s/it]2025-08-11 19:43:45,988 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:43:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.68s/it]2025-08-11 19:43:48,556 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:04<00:01, 1.64s/it]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.38s/it]\n",
+ "2025-08-11 19:43:50,638 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+a'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+a'})\n",
+ " 97%|██████████████████████████████████████--| 7088/7340 [257:32<9:09, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:51,527 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:43:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:43:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7089/7340 [257:33<9:07, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:43:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:52,894 - agent.ComputerAgent - INFO - Computer: click({'x': 462, 'y': 333, 'button': 'left'})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 462, 'y': 333, 'button': 'left'})\n",
+ "\u001b[92m19:43:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:53,522 - agent.ComputerAgent - INFO - Computer: click({'x': 164, 'y': 744})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 164, 'y': 744})\n",
+ "\u001b[92m19:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7089/7340 [257:35<9:07, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:43:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:54,204 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 732})\n",
+ "\u001b[92m19:43:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:54,883 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 148})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 148})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7091/7340 [257:37<9:02, 27.5 steps/min]\u001b[92m19:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:43:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:56,181 - agent.ComputerAgent - INFO - Computer: click({'x': 67, 'y': 324})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 67, 'y': 324})\n",
+ "\u001b[92m19:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:43:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:43:56,855 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 141, 'y': 145}, {'x': 281, 'y': 143}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 141, 'y': 145}, {'x': 281, 'y': 143}]})\n",
+ "2025-08-11 19:43:57,467 - agent.ComputerAgent - INFO - Computer: click({'x': 562, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 562, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:43:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7093/7340 [257:39<8:58, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:43:59,111 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:43:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:43:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:43:59,801 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:43:59,802 - agent.ComputerAgent - INFO - Computer: click({'x': 14, 'y': 538})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 14, 'y': 538})\n",
+ " 97%|██████████████████████████████████████--| 7108/7340 [257:41<8:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:01,149 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:44:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:44:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7109/7340 [257:43<8:22, 27.6 steps/min]\u001b[92m19:44:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:02,476 - agent.ComputerAgent - INFO - Computer: click({'x': 1010, 'y': 64})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1010, 'y': 64})\n",
+ "2025-08-11 19:44:03,151 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7109/7340 [257:44<8:22, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:03,839 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:44:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:44:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7110/7340 [257:45<8:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/51c56274-d8ae-4edf-8ff1-b637cd2fff66/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7110/7340 [257:47<8:20, 27.6 steps/min]2025-08-11 19:44:05,983 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 991, 'y': 400}, {'x': 991, 'y': 456}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 991, 'y': 400}, {'x': 991, 'y': 456}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:06,638 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:44:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:08,057 - agent.ComputerAgent - INFO - Computer: type({'text': 'set -e\\nsudo mkdir -p /home/test1\\nsudo chown root:root /home/test1\\nsudo chmod 755 /home/test1\\n\\n# Create user with no shell access\\nsudo useradd -M -s /usr/sbin/nologin -d / charles || true\\n\\necho \\'charles:Ex@mpleP@55w0rd!\\' | sudo chpasswd\\n\\n# Writable directory inside the jail\\nsudo mkdir -p /home/test1/uploads\\nsudo chown charles:charles /home/test1/uploads\\n\\n# Configure SSH to restrict user to SFTP chroot at /home/test1\\nsudo bash -lc \\'cat >>/etc/ssh/sshd_config <<\"EOF\"\\n\\nMatch User charles\\n ChrootDirectory /home/test1\\n ForceCommand internal-sftp\\n AllowTCPForwarding no\\n X11Forwarding no\\nEOF\\'\\n\\n# Apply changes\\nsudo systemctl restart ssh || sudo systemctl restart sshd\\n\\n# Show resulting setup\\nid charles\\nls -ld /home/test1 /home/test1/uploads'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'set -e\\nsudo mkdir -p /home/test1\\nsudo chown root:root /home/test1\\nsudo chmod 755 /home/test1\\n\\n# Create user with no shell access\\nsudo useradd -M -s /usr/sbin/nologin -d / charles || true\\n\\necho \\'charles:Ex@mpleP@55w0rd!\\' | sudo chpasswd\\n\\n# Writable directory inside the jail\\nsudo mkdir -p /home/test1/uploads\\nsudo chown charles:charles /home/test1/uploads\\n\\n# Configure SSH to restrict user to SFTP chroot at /home/test1\\nsudo bash -lc \\'cat >>/etc/ssh/sshd_config <<\"EOF\"\\n\\nMatch User charles\\n ChrootDirectory /home/test1\\n ForceCommand internal-sftp\\n AllowTCPForwarding no\\n X11Forwarding no\\nEOF\\'\\n\\n# Apply changes\\nsudo systemctl restart ssh || sudo systemctl restart sshd\\n\\n# Show resulting setup\\nid charles\\nls -ld /home/test1 /home/test1/uploads'})\n",
+ " 97%|██████████████████████████████████████--| 7110/7340 [257:49<8:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7112/7340 [257:50<8:15, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "Loading checkpoint shards: 0%| | 0/4 [00:00, ?it/s]2025-08-11 19:44:09,777 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:44:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:10,450 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:44:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 25%|██▌ | 1/4 [00:01<00:04, 1.67s/it]7.6 steps/min]2025-08-11 19:44:11,211 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:44:11,859 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:44:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v1/gyms/OSWorld-Ubuntu \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7112/7340 [257:53<8:16, 27.6 steps/min]2025-08-11 19:44:12,663 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "Loading checkpoint shards: 50%|█████ | 2/4 [00:03<00:03, 1.64s/it]INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:44:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:14,006 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ "Loading checkpoint shards: 75%|███████▌ | 3/4 [00:05<00:01, 1.70s/it]7.6 steps/min]2025-08-11 19:44:14,680 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:44:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.40s/it]\n",
+ "\u001b[92m19:44:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7113/7340 [257:58<8:13, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:17,531 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7113/7340 [257:59<8:13, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:44:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:18,193 - agent.ComputerAgent - INFO - Computer: click({'x': 128, 'y': 744})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 128, 'y': 744})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:18,848 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 223, 'y': 223}, {'x': 221, 'y': 448}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 223, 'y': 223}, {'x': 221, 'y': 448}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7113/7340 [258:01<8:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:20,842 - agent.ComputerAgent - INFO - Computer: click({'x': 709, 'y': 751})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 709, 'y': 751})\n",
+ "\u001b[92m19:44:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7115/7340 [258:02<8:09, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:21,522 - agent.ComputerAgent - INFO - Computer: click({'x': 79, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 79, 'y': 157})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7116/7340 [258:03<8:07, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:22,871 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:44:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:23,550 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 132})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1008, 'y': 132})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:25,922 - agent.ComputerAgent - INFO - Computer: type({'text': 'git clone https://github.com/xlang-ai/instructor-embedding\\n'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'git clone https://github.com/xlang-ai/instructor-embedding\\n'})\n",
+ " 97%|██████████████████████████████████████--| 7117/7340 [258:07<8:05, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:44:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:44:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:27,263 - agent.ComputerAgent - INFO - Computer: click({'x': 55, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 55, 'y': 133})\n",
+ "\u001b[92m19:44:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7119/7340 [258:08<8:00, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:27,937 - agent.ComputerAgent - INFO - Computer: click({'x': 147, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 147, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:29,250 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:30,594 - agent.ComputerAgent - INFO - Computer: type({'text': 'refresh'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'refresh'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:31,232 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7120/7340 [258:13<7:58, 27.6 steps/min]2025-08-11 19:44:31,874 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:44:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:32,568 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:44:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7123/7340 [258:14<7:52, 27.6 steps/min]2025-08-11 19:44:33,249 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:44:33,910 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:44:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7123/7340 [258:15<7:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/7f112db6-0b60-4e6c-86f5-0d87dc91f371/close \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7123/7340 [258:17<7:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7123/7340 [258:19<7:52, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:38,440 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:44:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:44:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:39,139 - agent.ComputerAgent - INFO - Computer: click({'x': 849, 'y': 477})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 849, 'y': 477})\n",
+ "\u001b[92m19:44:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:40,517 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7123/7340 [258:22<7:52, 27.6 steps/min]\u001b[92m19:44:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:42,558 - agent.ComputerAgent - INFO - Computer: type({'text': 'echo -n \"/home/user/Data3/lists/secret.docx\"'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'echo -n \"/home/user/Data3/lists/secret.docx\"'})\n",
+ "2025-08-11 19:44:43,220 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:44:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:44:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:45,837 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 97%|██████████████████████████████████████--| 7125/7340 [258:27<7:47, 27.6 steps/min]\u001b[92m19:44:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:44:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:46,470 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:44:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:44:47,100 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:44:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:44:47,734 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 989, 'y': 453}, {'x': 991, 'y': 343}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 989, 'y': 453}, {'x': 991, 'y': 343}]})\n",
+ "2025-08-11 19:44:48,374 - agent.ComputerAgent - INFO - Computer: click({'x': 720, 'y': 690})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 720, 'y': 690})\n",
+ "\u001b[92m19:44:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7127/7340 [258:30<7:43, 27.6 steps/min]\u001b[92m19:44:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:44:49,078 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:44:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:44:49,757 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 893, 'y': 182})\n",
+ "2025-08-11 19:44:50,410 - agent.ComputerAgent - INFO - Computer: double_click({'x': 28, 'y': 528})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 28, 'y': 528})\n",
+ " 97%|██████████████████████████████████████--| 7129/7340 [258:32<7:39, 27.6 steps/min]2025-08-11 19:44:51,581 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:44:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7131/7340 [258:33<7:34, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7131/7340 [258:36<7:34, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:55,831 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:44:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7131/7340 [258:37<7:34, 27.6 steps/min]2025-08-11 19:44:56,491 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:44:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:44:57,126 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7131/7340 [258:38<7:34, 27.6 steps/min]2025-08-11 19:44:57,825 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:44:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:44:58,500 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:44:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7131/7340 [258:40<7:34, 27.6 steps/min]2025-08-11 19:44:59,180 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:44:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:00,518 - agent.ComputerAgent - INFO - Computer: wait({})\n",
+ "INFO:agent.ComputerAgent:Computer: wait({})\n",
+ " 97%|██████████████████████████████████████--| 7131/7340 [258:42<7:34, 27.6 steps/min]2025-08-11 19:45:01,181 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:45:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:45:01,887 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:45:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7132/7340 [258:43<7:32, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:03,917 - agent.ComputerAgent - INFO - Computer: type({'text': 'res.png'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'res.png'})\n",
+ " 97%|██████████████████████████████████████--| 7132/7340 [258:45<7:32, 27.6 steps/min]\u001b[92m19:45:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:04,573 - agent.ComputerAgent - INFO - Computer: click({'x': 457, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 457, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:05,887 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl++'})\n",
+ " 97%|██████████████████████████████████████--| 7133/7340 [258:47<7:30, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:07,220 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7134/7340 [258:49<7:28, 27.6 steps/min]\u001b[92m19:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:45:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:09,084 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 279})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 279})\n",
+ "\u001b[92m19:45:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7134/7340 [258:50<7:28, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:09,762 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 274, 'y': 181}, {'x': 274, 'y': 132}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 274, 'y': 181}, {'x': 274, 'y': 132}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:10,433 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:45:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7135/7340 [258:52<7:26, 27.6 steps/min]\u001b[92m19:45:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:12,137 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:45:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/create_environment \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7136/7340 [258:53<7:24, 27.6 steps/min]2025-08-11 19:45:12,819 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 732})\n",
+ "2025-08-11 19:45:13,469 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:45:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7136/7340 [258:55<7:24, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7137/7340 [258:57<7:21, 27.6 steps/min]\u001b[92m19:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:16,804 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 154})\n",
+ "\u001b[92m19:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7137/7340 [258:58<7:21, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:17,450 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:45:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:45:18,148 - agent.ComputerAgent - INFO - Computer: click({'x': 811, 'y': 182})\n",
+ "2025-08-11 19:45:18,840 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 133})\n",
+ "\u001b[92m19:45:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:20,525 - agent.ComputerAgent - INFO - Computer: type({'text': 'pwd\\nls -la\\n'})\n",
+ " 97%|██████████████████████████████████████--| 7138/7340 [259:02<7:19, 27.6 steps/min]2025-08-11 19:45:21,173 - agent.ComputerAgent - INFO - Computer: click({'x': 136, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:22,532 - agent.ComputerAgent - INFO - Computer: type({'text': \"id charles || echo 'missing'\"})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:23,846 - agent.ComputerAgent - INFO - Computer: type({'text': 'kid3 & disown'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7141/7340 [259:06<7:13, 27.6 steps/min]\u001b[92m19:45:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:25,842 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 35})\n",
+ "2025-08-11 19:45:26,512 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "\u001b[92m19:45:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:27,899 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ " 97%|██████████████████████████████████████--| 7144/7340 [259:09<7:06, 27.6 steps/min]2025-08-11 19:45:28,562 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 8})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7146/7340 [259:10<7:02, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:29,840 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7147/7340 [259:11<6:59, 27.6 steps/min]\u001b[92m19:45:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:30,995 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 991, 'y': 347}, {'x': 991, 'y': 223}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7147/7340 [259:13<7:00, 27.6 steps/min]2025-08-11 19:45:32,322 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7148/7340 [259:14<6:57, 27.6 steps/min]\u001b[92m19:45:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:33,483 - agent.ComputerAgent - INFO - Computer: click({'x': 912, 'y': 580})\n",
+ " 97%|██████████████████████████████████████--| 7148/7340 [259:15<6:57, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:34,652 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:45:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7149/7340 [259:16<6:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:35,342 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:45:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:36,040 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "\u001b[92m19:45:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:45:36,732 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:45:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:37 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ " 97%|██████████████████████████████████████--| 7149/7340 [259:19<6:55, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:38,072 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:45:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:45:38,775 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "\u001b[92m19:45:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 97%|██████████████████████████████████████--| 7149/7340 [259:20<6:55, 27.6 steps/min]2025-08-11 19:45:39,486 - agent.ComputerAgent - INFO - Computer: click({'x': 671, 'y': 227})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:40,146 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:45:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7149/7340 [259:21<6:55, 27.6 steps/min]2025-08-11 19:45:40,792 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:45:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:42,172 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:45:42,173 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ " 97%|██████████████████████████████████████--| 7150/7340 [259:23<6:53, 27.6 steps/min]2025-08-11 19:45:42,819 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:45:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 97%|██████████████████████████████████████--| 7151/7340 [259:25<6:51, 27.6 steps/min]\u001b[92m19:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:44,841 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "\u001b[92m19:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:45,494 - agent.ComputerAgent - INFO - Computer: click({'x': 615, 'y': 564})\n",
+ "2025-08-11 19:45:46,152 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "\u001b[92m19:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 97%|██████████████████████████████████████--| 7151/7340 [259:27<6:51, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:46,860 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 210, 'y': 177}, {'x': 210, 'y': 446}]})\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:45:47,481 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "\u001b[92m19:45:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:48,811 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ " 97%|██████████████████████████████████████--| 7155/7340 [259:30<6:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:50,079 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:45:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7157/7340 [259:31<6:38, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:45:50,770 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 737})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:52,060 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+shift+end'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/acf3037a-4b6c-4ea8-b81c-ffc2e76132e1/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:45:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7157/7340 [259:34<6:38, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:45:53,375 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "\u001b[92m19:45:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:45:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:54,705 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl++'})\n",
+ "2025-08-11 19:45:55,354 - agent.ComputerAgent - INFO - Computer: click({'x': 322, 'y': 152})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7158/7340 [259:37<6:36, 27.6 steps/min]2025-08-11 19:45:56,012 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "\u001b[92m19:45:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:57,333 - agent.ComputerAgent - INFO - Agent: The repository has been cloned to /home/user/instructor-embedding.\n",
+ "\n",
+ "Task completed\n",
+ "2025-08-11 19:45:57,972 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 217\n",
+ " - prompt_tokens: 4659\n",
+ " - total_tokens: 4876\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 192\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 0\n",
+ " - response_cost: $0.0080\n",
+ "2025-08-11 19:45:58,622 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:45:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:45:59,963 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ENTER'})\n",
+ " 98%|███████████████████████████████████████-| 7160/7340 [259:41<6:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:01,276 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "\u001b[92m19:46:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:01,972 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:46:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7161/7340 [259:43<6:29, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:46:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:03,146 - agent.ComputerAgent - INFO - Computer: click({'x': 968, 'y': 182})\n",
+ " 98%|███████████████████████████████████████-| 7161/7340 [259:44<6:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7162/7340 [259:46<6:27, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:05,011 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:46:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:07,051 - agent.ComputerAgent - INFO - Computer: type({'text': '20 cm'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '20 cm'})\n",
+ " 98%|███████████████████████████████████████-| 7162/7340 [259:48<6:27, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:07,669 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:46:07,670 - agent.ComputerAgent - INFO - Computer: click({'x': 115, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 115, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:09,004 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ "\u001b[92m19:46:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7163/7340 [259:51<6:25, 27.6 steps/min]\u001b[92m19:46:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:10,322 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:46:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:10,970 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 186})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 893, 'y': 186})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v1/environments/9053f5ae-149f-4a73-a89e-977f3e750435/reset \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 98%|███████████████████████████████████████-| 7165/7340 [259:53<6:20, 27.6 steps/min]\u001b[92m19:46:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:12,713 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:46:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:13,398 - agent.ComputerAgent - INFO - Computer: click({'x': 711, 'y': 334})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 711, 'y': 334})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7181/7340 [259:55<5:45, 27.6 steps/min]2025-08-11 19:46:14,018 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 429})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 429})\n",
+ "2025-08-11 19:46:14,713 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:46:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7182/7340 [259:56<5:43, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:16,004 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/765aa707-cf44-4dd0-8933-2c2b94870afd/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7183/7340 [259:57<5:40, 27.6 steps/min]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:46:16,691 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:46:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:18,720 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'esc'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'esc'})\n",
+ " 98%|███████████████████████████████████████-| 7183/7340 [260:00<5:40, 27.6 steps/min]\u001b[92m19:46:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:19,407 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 991, 'y': 244}, {'x': 991, 'y': 730}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 991, 'y': 244}, {'x': 991, 'y': 730}]})\n",
+ "\u001b[92m19:46:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:20,034 - agent.ComputerAgent - INFO - Computer: click({'x': 154, 'y': 51})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 154, 'y': 51})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7184/7340 [260:01<5:38, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:21,341 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ "INFO:agent.ComputerAgent:Computer: screenshot({})\n",
+ " 98%|███████████████████████████████████████-| 7186/7340 [260:03<5:34, 27.6 steps/min]2025-08-11 19:46:21,966 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:46:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:22,629 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:46:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7187/7340 [260:04<5:32, 27.6 steps/min]2025-08-11 19:46:23,291 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:46:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:23,963 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:46:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7187/7340 [260:05<5:32, 27.6 steps/min]2025-08-11 19:46:24,643 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:46:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:25,965 - agent.ComputerAgent - INFO - Computer: get_environment({})\n",
+ "INFO:agent.ComputerAgent:Computer: get_environment({})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7187/7340 [260:07<5:32, 27.6 steps/min]2025-08-11 19:46:26,663 - agent.ComputerAgent - INFO - LLM processing started with 1 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 1 messages\n",
+ "\u001b[92m19:46:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7188/7340 [260:09<5:30, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:46:28,013 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7188/7340 [260:10<5:30, 27.6 steps/min]\u001b[92m19:46:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:29,351 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:46:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:30,041 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:46:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7188/7340 [260:11<5:30, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:46:30,722 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:46:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:31,394 - agent.ComputerAgent - INFO - Computer: click({'x': 753, 'y': 322})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 753, 'y': 322})\n",
+ "\u001b[92m19:46:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7188/7340 [260:13<5:30, 27.6 steps/min]2025-08-11 19:46:32,056 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 275, 'y': 152}, {'x': 81, 'y': 155}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 275, 'y': 152}, {'x': 81, 'y': 155}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 98%|███████████████████████████████████████-| 7189/7340 [260:15<5:27, 27.6 steps/min]\u001b[92m19:46:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:34,051 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:46:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:34,715 - agent.ComputerAgent - INFO - Agent: Taking a screenshot to see the current computer screen.\n",
+ "INFO:agent.ComputerAgent:Agent: Taking a screenshot to see the current computer screen.\n",
+ "2025-08-11 19:46:34,716 - agent.ComputerAgent - INFO - Computer: click({'x': 266, 'y': 164})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 266, 'y': 164})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:46:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7190/7340 [260:17<5:25, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:35,946 - agent.ComputerAgent - INFO - Computer: click({'x': 182, 'y': 110})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 182, 'y': 110})\n",
+ "\u001b[92m19:46:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:36,612 - agent.ComputerAgent - INFO - Computer: click({'x': 996, 'y': 732})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 996, 'y': 732})\n",
+ " 98%|███████████████████████████████████████-| 7193/7340 [260:19<5:19, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 98%|███████████████████████████████████████-| 7193/7340 [260:20<5:19, 27.6 steps/min]\u001b[92m19:46:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:39,437 - agent.ComputerAgent - INFO - Computer: click({'x': 244, 'y': 89})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 244, 'y': 89})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7193/7340 [260:21<5:19, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:41,619 - agent.ComputerAgent - INFO - Computer: click({'x': 922, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 922, 'y': 181})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7194/7340 [260:23<5:17, 27.6 steps/min]2025-08-11 19:46:42,304 - agent.ComputerAgent - INFO - LLM processing started with 6 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 6 messages\n",
+ "\u001b[92m19:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:42,935 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:46:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:43,589 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:46:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:44,263 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:46:44,944 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:46:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7195/7340 [260:26<5:14, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:46,301 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:46:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7195/7340 [260:28<5:14, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:47,001 - agent.ComputerAgent - INFO - Computer: double_click({'x': 213, 'y': 457})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 213, 'y': 457})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:48,346 - agent.ComputerAgent - INFO - Computer: type({'text': \"sudo mkdir -p /home/test1 && sudo chown root:root /home/test1 && sudo chmod 755 /home/test1 && echo 'dir ok'\"})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': \"sudo mkdir -p /home/test1 && sudo chown root:root /home/test1 && sudo chmod 755 /home/test1 && echo 'dir ok'\"})\n",
+ " 98%|███████████████████████████████████████-| 7195/7340 [260:30<5:14, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 98%|███████████████████████████████████████-| 7197/7340 [260:31<5:10, 27.6 steps/min]\u001b[92m19:46:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:50,156 - agent.ComputerAgent - INFO - Computer: click({'x': 430, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 430, 'y': 101})\n",
+ " 98%|███████████████████████████████████████-| 7197/7340 [260:32<5:10, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:51,333 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:46:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:52,605 - agent.ComputerAgent - INFO - Computer: type({'text': 'Maturity Date'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Maturity Date'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:46:54,635 - agent.ComputerAgent - INFO - Computer: type({'text': 'kid3-cli -h'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'kid3-cli -h'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7198/7340 [260:36<5:08, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:46:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:57,297 - agent.ComputerAgent - INFO - Computer: type({'text': '20 cm'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '20 cm'})\n",
+ "\u001b[92m19:46:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7200/7340 [260:39<5:04, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:57,971 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 75, 'y': 178})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:46:58,633 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:46:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:46:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7201/7340 [260:40<5:01, 27.6 steps/min]2025-08-11 19:46:59,277 - agent.ComputerAgent - INFO - Computer: click({'x': 256, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 256, 'y': 155})\n",
+ "\u001b[92m19:46:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:46:59,946 - agent.ComputerAgent - INFO - Computer: click({'x': 118, 'y': 181})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 118, 'y': 181})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7202/7340 [260:42<4:59, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7204/7340 [260:43<4:55, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:47:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:02,935 - agent.ComputerAgent - INFO - Computer: double_click({'x': 184, 'y': 105})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 184, 'y': 105})\n",
+ " 98%|███████████████████████████████████████-| 7204/7340 [260:44<4:55, 27.6 steps/min]\u001b[92m19:47:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:03,599 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 991, 'y': 487}, {'x': 991, 'y': 416}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 991, 'y': 487}, {'x': 991, 'y': 416}]})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7205/7340 [260:46<4:53, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:04,924 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:47:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:47:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7206/7340 [260:47<4:50, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:06,274 - agent.ComputerAgent - INFO - Computer: click({'x': 238, 'y': 310})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 238, 'y': 310})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:06,924 - agent.ComputerAgent - INFO - LLM processing started with 8 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 8 messages\n",
+ "\u001b[92m19:47:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:47:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7206/7340 [260:48<4:50, 27.6 steps/min]2025-08-11 19:47:07,564 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:47:07 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:08,229 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 157})\n",
+ "2025-08-11 19:47:08,884 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:47:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 98%|███████████████████████████████████████-| 7207/7340 [260:51<4:48, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:10,222 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:47:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:10,864 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:47:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:47:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7208/7340 [260:53<4:46, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:12,222 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:12,865 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:47:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:13,523 - agent.ComputerAgent - INFO - Computer: click({'x': 510, 'y': 449})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 510, 'y': 449})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:14,844 - agent.ComputerAgent - INFO - Computer: type({'text': 'Revenue'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'Revenue'})\n",
+ " 98%|███████████████████████████████████████-| 7208/7340 [260:56<4:46, 27.6 steps/min]\u001b[92m19:47:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:16,856 - agent.ComputerAgent - INFO - Computer: click({'x': 145, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 145, 'y': 53})\n",
+ " 98%|███████████████████████████████████████-| 7210/7340 [260:58<4:42, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:18,225 - agent.ComputerAgent - INFO - Computer: click({'x': 210, 'y': 185})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 210, 'y': 185})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:19,584 - agent.ComputerAgent - INFO - Agent: Yes—you can make Thunderbird run filters automatically on subfolders (not just Inbox). Do the following once per account you want this behavior for.\n",
+ "\n",
+ "1) Turn on the hidden preference to run filters on non-Inbox folders\n",
+ "- Thunderbird > Settings > General > scroll to the bottom > Config Editor…\n",
+ "- In the search box, type: applyIncomingFilters\n",
+ "- If mail.server.default.applyIncomingFilters is false or missing, set it to true.\n",
+ "- If you prefer per-account control, also set mail.server.serverN.applyIncomingFilters = true for the specific account (N is the server number for that account; you can set both default and per-account if you like).\n",
+ "\n",
+ "2) Ensure Thunderbird actually checks those subfolders for new mail\n",
+ "- For each subfolder you want filters to run on: right‑click the folder > Properties > check “When getting new messages, always check this folder” > OK.\n",
+ "- Alternatively (account-wide): Account Settings > Server Settings (for the IMAP account) > enable “Check for new messages at startup” and “Check for new messages every X minutes.” In Advanced IMAP settings (or via Config Editor), ensure mail.server.serverN.check_all_folders_for_new = true to have Thunderbird poll all folders.\n",
+ "\n",
+ "3) Make sure your filters are set to run on incoming mail\n",
+ "- Tools (≡ menu) > Message Filters.\n",
+ "- Choose the account, edit each filter you want, and under “Apply filter when,” tick the “Getting New Mail” options (before/after junk classification as appropriate). Save.\n",
+ "\n",
+ "4) Restart Thunderbird\n",
+ "- Close and reopen Thunderbird so the new prefs apply cleanly.\n",
+ "\n",
+ "5) Test\n",
+ "- Deliver a message directly to a subfolder (e.g., by a server-side rule or by sending to an address that your server routes into that folder). When Thunderbird detects new mail in that subfolder, your filters should run automatically.\n",
+ "\n",
+ "Notes and limitations\n",
+ "- IMAP required: This works when new mail arrives directly into that subfolder on the server (IMAP). For POP accounts and Local Folders, Thunderbird only “receives” mail into Inbox; filters won’t auto-trigger simply because a message...\n",
+ "INFO:agent.ComputerAgent:Agent: Yes—you can make Thunderbird run filters automatically on subfolders (not just Inbox). Do the following once per account you want this behavior for.\n",
+ "\n",
+ "1) Turn on the hidden preference to run filters on non-Inbox folders\n",
+ "- Thunderbird > Settings > General > scroll to the bottom > Config Editor…\n",
+ "- In the search box, type: applyIncomingFilters\n",
+ "- If mail.server.default.applyIncomingFilters is false or missing, set it to true.\n",
+ "- If you prefer per-account control, also set mail.server.serverN.applyIncomingFilters = true for the specific account (N is the server number for that account; you can set both default and per-account if you like).\n",
+ "\n",
+ "2) Ensure Thunderbird actually checks those subfolders for new mail\n",
+ "- For each subfolder you want filters to run on: right‑click the folder > Properties > check “When getting new messages, always check this folder” > OK.\n",
+ "- Alternatively (account-wide): Account Settings > Server Settings (for the IMAP account) > enable “Check for new messages at startup” and “Check for new messages every X minutes.” In Advanced IMAP settings (or via Config Editor), ensure mail.server.serverN.check_all_folders_for_new = true to have Thunderbird poll all folders.\n",
+ "\n",
+ "3) Make sure your filters are set to run on incoming mail\n",
+ "- Tools (≡ menu) > Message Filters.\n",
+ "- Choose the account, edit each filter you want, and under “Apply filter when,” tick the “Getting New Mail” options (before/after junk classification as appropriate). Save.\n",
+ "\n",
+ "4) Restart Thunderbird\n",
+ "- Close and reopen Thunderbird so the new prefs apply cleanly.\n",
+ "\n",
+ "5) Test\n",
+ "- Deliver a message directly to a subfolder (e.g., by a server-side rule or by sending to an address that your server routes into that folder). When Thunderbird detects new mail in that subfolder, your filters should run automatically.\n",
+ "\n",
+ "Notes and limitations\n",
+ "- IMAP required: This works when new mail arrives directly into that subfolder on the server (IMAP). For POP accounts and Local Folders, Thunderbird only “receives” mail into Inbox; filters won’t auto-trigger simply because a message...\n",
+ "2025-08-11 19:47:20,255 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 3021\n",
+ " - prompt_tokens: 3980\n",
+ " - total_tokens: 7001\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 2496\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0330\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 3021\n",
+ " - prompt_tokens: 3980\n",
+ " - total_tokens: 7001\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 2496\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 1920\n",
+ " - response_cost: $0.0330\n",
+ " 98%|███████████████████████████████████████-| 7212/7340 [261:01<4:37, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:21,621 - agent.ComputerAgent - INFO - Computer: type({'text': ' chrome refresh 2023'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': ' chrome refresh 2023'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:22,286 - agent.ComputerAgent - INFO - Computer: click({'x': 239, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 239, 'y': 178})\n",
+ "2025-08-11 19:47:22,928 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 980, 'y': 60})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7213/7340 [261:04<4:35, 27.6 steps/min]2025-08-11 19:47:23,594 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:47:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:24,237 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 98%|███████████████████████████████████████-| 7216/7340 [261:06<4:29, 27.6 steps/min]2025-08-11 19:47:24,914 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:47:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:26,264 - agent.ComputerAgent - INFO - Agent: The Impress slide was exported to PNG using the default settings and saved as “res.png” on the Desktop. Task completed.\n",
+ "INFO:agent.ComputerAgent:Agent: The Impress slide was exported to PNG using the default settings and saved as “res.png” on the Desktop. Task completed.\n",
+ "2025-08-11 19:47:26,906 - agent.ComputerAgent - INFO - Total usage:\n",
+ " - completion_tokens: 162\n",
+ " - prompt_tokens: 8778\n",
+ " - total_tokens: 8940\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 128\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 6656\n",
+ " - response_cost: $0.0051\n",
+ "INFO:agent.ComputerAgent:Total usage:\n",
+ " - completion_tokens: 162\n",
+ " - prompt_tokens: 8778\n",
+ " - total_tokens: 8940\n",
+ " - completion_tokens_details:\n",
+ " - accepted_prediction_tokens: 0\n",
+ " - audio_tokens: 0\n",
+ " - reasoning_tokens: 128\n",
+ " - rejected_prediction_tokens: 0\n",
+ " - prompt_tokens_details:\n",
+ " - audio_tokens: 0\n",
+ " - cached_tokens: 6656\n",
+ " - response_cost: $0.0051\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/5e73167c-1836-4752-b7e8-57434e5d7875/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7217/7340 [261:09<4:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/25f45afe-ee57-4629-9991-c515438accab/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ " 98%|███████████████████████████████████████-| 7217/7340 [261:10<4:27, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:30,124 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7233/7340 [261:12<3:51, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cd53d966-2507-485c-bbe9-ee55dbbeddd0/close \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:31,452 - agent.ComputerAgent - INFO - LLM processing started with 10 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 10 messages\n",
+ "\u001b[92m19:47:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:32,084 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:47:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:47:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7233/7340 [261:13<3:51, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:32,764 - agent.ComputerAgent - INFO - Computer: click({'x': 76, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 76, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:33,390 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:47:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7235/7340 [261:15<3:47, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/6f3b006b-141d-439d-b6cb-eed7bd6483c3/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7245/7340 [261:16<3:25, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9882ec8e-4618-4be3-802e-bb5c58c9fbbc/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7245/7340 [261:17<3:25, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7245/7340 [261:18<3:25, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:37,446 - agent.ComputerAgent - INFO - Computer: click({'x': 901, 'y': 579})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 901, 'y': 579})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7245/7340 [261:20<3:25, 27.7 steps/min]\u001b[92m19:47:38 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:39,450 - agent.ComputerAgent - INFO - Computer: click({'x': 1011, 'y': 194})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 1011, 'y': 194})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7246/7340 [261:21<3:23, 27.7 steps/min]\u001b[92m19:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:40,815 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 243, 'y': 52})\n",
+ "\u001b[92m19:47:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:41,487 - agent.ComputerAgent - INFO - Computer: click({'x': 259, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 259, 'y': 178})\n",
+ "\u001b[92m19:47:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7247/7340 [261:23<3:21, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:42,158 - agent.ComputerAgent - INFO - Computer: click({'x': 893, 'y': 296})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 893, 'y': 296})\n",
+ "2025-08-11 19:47:42,787 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:47:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7250/7340 [261:28<3:14, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:48,590 - agent.ComputerAgent - INFO - LLM processing started with 12 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 12 messages\n",
+ "\u001b[92m19:47:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:47:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7250/7340 [261:30<3:14, 27.7 steps/min]\u001b[92m19:47:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:49,265 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:47:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7250/7340 [261:31<3:14, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:50,594 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:47:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:51,304 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:47:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:47:51,990 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 275, 'y': 181}, {'x': 79, 'y': 182}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 275, 'y': 181}, {'x': 79, 'y': 182}]})\n",
+ " 99%|███████████████████████████████████████-| 7250/7340 [261:33<3:14, 27.7 steps/min]\u001b[92m19:47:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:52,662 - agent.ComputerAgent - INFO - Computer: click({'x': 252, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 252, 'y': 230})\n",
+ " 99%|███████████████████████████████████████-| 7252/7340 [261:35<3:10, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:56,131 - agent.ComputerAgent - INFO - Computer: type({'text': '=A2+B2'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': '=A2+B2'})\n",
+ " 99%|███████████████████████████████████████-| 7252/7340 [261:37<3:10, 27.7 steps/min]\u001b[92m19:47:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:47:56,793 - agent.ComputerAgent - INFO - Computer: click({'x': 563, 'y': 101})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 563, 'y': 101})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:47:58,146 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'alt+left'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'alt+left'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:47:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7253/7340 [261:40<3:08, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:47:59,467 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:47:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:48:00,135 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 24 messages\n",
+ "\u001b[92m19:48:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:48:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7254/7340 [261:41<3:06, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:00,837 - agent.ComputerAgent - INFO - Computer: click({'x': 111, 'y': 52})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 111, 'y': 52})\n",
+ " 99%|███████████████████████████████████████-| 7254/7340 [261:42<3:06, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/ba3f752c-7268-49e8-819c-5b471e52bd54/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7255/7340 [261:44<3:04, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7255/7340 [261:45<3:04, 27.7 steps/min]\u001b[92m19:48:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:04,749 - agent.ComputerAgent - INFO - Computer: click({'x': 19, 'y': 45})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 19, 'y': 45})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7255/7340 [261:47<3:04, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:48:06,105 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:48:06,746 - agent.ComputerAgent - INFO - LLM processing started with 14 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 14 messages\n",
+ "\u001b[92m19:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7256/7340 [261:48<3:01, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:48:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:07,447 - agent.ComputerAgent - INFO - Computer: click({'x': 273, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 273, 'y': 90})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7256/7340 [261:49<3:01, 27.7 steps/min]2025-08-11 19:48:08,626 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:48:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7257/7340 [261:51<2:59, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7257/7340 [261:52<2:59, 27.7 steps/min]2025-08-11 19:48:11,495 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:48:11 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7257/7340 [261:53<2:59, 27.7 steps/min]\u001b[92m19:48:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:12,687 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 211, 'y': 178}, {'x': 211, 'y': 473}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 211, 'y': 178}, {'x': 211, 'y': 473}]})\n",
+ " 99%|███████████████████████████████████████-| 7257/7340 [261:54<2:59, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:14,357 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 22 messages\n",
+ "\u001b[92m19:48:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7258/7340 [261:56<2:57, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7263/7340 [261:57<2:46, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:17,188 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/40c5f987-3d81-47fe-8798-4e45d9755f93/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7263/7340 [261:59<2:46, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:19,057 - agent.ComputerAgent - INFO - Computer: click({'x': 257, 'y': 152})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 257, 'y': 152})\n",
+ " 99%|███████████████████████████████████████-| 7265/7340 [262:03<2:42, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:23,287 - agent.ComputerAgent - INFO - LLM processing started with 16 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 16 messages\n",
+ "\u001b[92m19:48:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:48:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/31367309-0055-409a-a992-edf729fb010c/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7265/7340 [262:05<2:42, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:48:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:48:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7265/7340 [262:07<2:42, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:48:25,935 - agent.ComputerAgent - INFO - Computer: click({'x': 95, 'y': 74})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 95, 'y': 74})\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:48:26,606 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 20 messages\n",
+ "\u001b[92m19:48:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7266/7340 [262:08<2:40, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:48:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:27,282 - agent.ComputerAgent - INFO - Computer: click({'x': 986, 'y': 133})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 986, 'y': 133})\n",
+ "\u001b[92m19:48:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:27,901 - agent.ComputerAgent - INFO - Computer: click({'x': 528, 'y': 50})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 528, 'y': 50})\n",
+ " 99%|███████████████████████████████████████-| 7267/7340 [262:09<2:38, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/68f513cf-ec21-4216-bab9-84c5bfcfce88/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7269/7340 [262:10<2:33, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:48:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:30,287 - agent.ComputerAgent - INFO - Computer: click({'x': 105, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 105, 'y': 230})\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:14<2:31, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:33,466 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:48:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:15<2:31, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:34,163 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:48:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:48:34,825 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 18 messages\n",
+ "\u001b[92m19:48:34 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:16<2:31, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:36,005 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:48:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:17<2:31, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:21<2:31, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:22<2:31, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:48:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:48:43,277 - agent.ComputerAgent - INFO - Computer: click({'x': 66, 'y': 164})\n",
+ " 99%|███████████████████████████████████████-| 7270/7340 [262:25<2:31, 27.7 steps/min]\u001b[92m19:48:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:48:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:48:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:48:44,471 - agent.ComputerAgent - INFO - Computer: click({'x': 248, 'y': 173})\n",
+ " 99%|███████████████████████████████████████-| 7271/7340 [262:26<2:29, 27.7 steps/min]\u001b[92m19:48:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:45,143 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 288, 'y': 153}, {'x': 81, 'y': 155}]})\n",
+ " 99%|███████████████████████████████████████-| 7273/7340 [262:29<2:25, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7273/7340 [262:30<2:25, 27.7 steps/min]2025-08-11 19:48:49,336 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:48:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7273/7340 [262:31<2:25, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:50,547 - agent.ComputerAgent - INFO - LLM processing started with 18 messages\n",
+ "\u001b[92m19:48:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:51,906 - agent.ComputerAgent - INFO - Computer: type({'text': 'webui refresh 2023'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7273/7340 [262:33<2:25, 27.7 steps/min]2025-08-11 19:48:52,576 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:48:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:48:53,923 - agent.ComputerAgent - INFO - Computer: type({'text': 'Thunderbird'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7274/7340 [262:36<2:22, 27.7 steps/min]\u001b[92m19:48:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:48:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:48:56,172 - agent.ComputerAgent - INFO - Computer: double_click({'x': 144, 'y': 167})\n",
+ " 99%|███████████████████████████████████████-| 7276/7340 [262:39<2:18, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:48:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7276/7340 [262:40<2:18, 27.7 steps/min]\u001b[92m19:48:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:00,051 - agent.ComputerAgent - INFO - Computer: click({'x': 761, 'y': 229})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:00,717 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:49:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:49:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:49:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7277/7340 [262:44<2:16, 27.7 steps/min]2025-08-11 19:49:03,565 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:49:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:49:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:04,222 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 237, 'y': 178}, {'x': 258, 'y': 280}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7277/7340 [262:45<2:16, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/fa081188-4859-4858-9d33-0f9675111182/close \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7278/7340 [262:46<2:14, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:06,401 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "\u001b[92m19:49:06 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7278/7340 [262:51<2:14, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:10,616 - agent.ComputerAgent - INFO - LLM processing started with 20 messages\n",
+ "\u001b[92m19:49:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7278/7340 [262:53<2:14, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7278/7340 [262:55<2:14, 27.7 steps/min]\u001b[92m19:49:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:49:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:49:14,305 - agent.ComputerAgent - INFO - Computer: click({'x': 1008, 'y': 223})\n",
+ "\u001b[92m19:49:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:49:14,966 - agent.ComputerAgent - INFO - Computer: click({'x': 75, 'y': 135})\n",
+ " 99%|███████████████████████████████████████-| 7278/7340 [262:56<2:14, 27.7 steps/min]\u001b[92m19:49:15 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:15,652 - agent.ComputerAgent - INFO - Computer: double_click({'x': 540, 'y': 128})\n",
+ " 99%|███████████████████████████████████████-| 7281/7340 [262:58<2:07, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7281/7340 [262:59<2:07, 27.7 steps/min]\u001b[92m19:49:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:18,506 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 157})\n",
+ " 99%|███████████████████████████████████████-| 7281/7340 [263:00<2:07, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:20,217 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "\u001b[92m19:49:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7282/7340 [263:03<2:05, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:49:23,091 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:49:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7282/7340 [263:04<2:05, 27.7 steps/min]2025-08-11 19:49:23,787 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 178})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:24,438 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:49:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:25,775 - agent.ComputerAgent - INFO - Computer: type({'text': \"=VLOOKUP(C2,'Retail Price'.$A$2:$B$200,2,0)*E2*(1-F2)\"})\n",
+ " 99%|███████████████████████████████████████-| 7282/7340 [263:07<2:05, 27.7 steps/min]2025-08-11 19:49:26,452 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:49:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7284/7340 [263:12<2:01, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:31,690 - agent.ComputerAgent - INFO - LLM processing started with 22 messages\n",
+ "\u001b[92m19:49:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7284/7340 [263:14<2:01, 27.7 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:49:33,032 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "\u001b[92m19:49:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:49:33 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:33,738 - agent.ComputerAgent - INFO - Computer: click({'x': 1004, 'y': 60})\n",
+ " 99%|███████████████████████████████████████-| 7285/7340 [263:16<1:59, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7285/7340 [263:17<1:59, 27.7 steps/min]\u001b[92m19:49:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:36,569 - agent.ComputerAgent - INFO - Computer: click({'x': 980, 'y': 60})\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:20<1:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:21<1:57, 27.7 steps/min]2025-08-11 19:49:40,273 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "\u001b[92m19:49:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:22<1:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:42,639 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'CTRL+H'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:24<1:57, 27.7 steps/min]2025-08-11 19:49:43,312 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:49:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:25<1:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:49:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:27<1:57, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:49:47,535 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "\u001b[92m19:49:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:49:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:48,871 - agent.ComputerAgent - INFO - Computer: screenshot({})\n",
+ " 99%|███████████████████████████████████████-| 7286/7340 [263:30<1:57, 27.6 steps/min]2025-08-11 19:49:49,547 - agent.ComputerAgent - INFO - Computer: click({'x': 259, 'y': 180})\n",
+ "2025-08-11 19:49:50,246 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 55, 'y': 164}, {'x': 55, 'y': 600}]})\n",
+ " 99%|███████████████████████████████████████-| 7291/7340 [263:33<1:46, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/8e75deb1-3c97-408b-8c7d-f4681b322141/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7291/7340 [263:34<1:46, 27.7 steps/min]\u001b[92m19:49:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:49:53,552 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "\u001b[92m19:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:49:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:49:54,210 - agent.ComputerAgent - INFO - Computer: click({'x': 867, 'y': 296})\n",
+ " 99%|███████████████████████████████████████-| 7291/7340 [263:35<1:46, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:55,903 - agent.ComputerAgent - INFO - LLM processing started with 24 messages\n",
+ "\u001b[92m19:49:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:37<1:44, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:49:56,562 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "\u001b[92m19:49:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:49:57,261 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:49:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:39<1:44, 27.7 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:41<1:44, 27.7 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:42<1:44, 27.7 steps/min]2025-08-11 19:50:00,993 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:50:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:49<1:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:50:08,913 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ctrl+c'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ctrl+c'})\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:50<1:44, 27.6 steps/min]2025-08-11 19:50:10,074 - agent.ComputerAgent - INFO - LLM processing started with 26 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 26 messages\n",
+ "\u001b[92m19:50:10 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:51<1:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:52<1:44, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:12 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:54<1:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:55<1:44, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:50:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:50:14,024 - agent.ComputerAgent - INFO - Computer: click({'x': 871, 'y': 135})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 871, 'y': 135})\n",
+ "\u001b[92m19:50:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:50:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7292/7340 [263:56<1:44, 27.6 steps/min]\u001b[92m19:50:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:15,204 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 275, 'y': 152}, {'x': 79, 'y': 154}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 275, 'y': 152}, {'x': 79, 'y': 154}]})\n",
+ " 99%|███████████████████████████████████████-| 7293/7340 [263:57<1:42, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:50:17,610 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ALT+TAB'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ALT+TAB'})\n",
+ " 99%|███████████████████████████████████████-| 7294/7340 [263:59<1:39, 27.6 steps/min]2025-08-11 19:50:18,781 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:50:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7294/7340 [264:00<1:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7294/7340 [264:01<1:39, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:50:21,132 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:50:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7294/7340 [264:02<1:39, 27.6 steps/min]2025-08-11 19:50:22,169 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7294/7340 [264:03<1:39, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:50:22 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:22,830 - agent.ComputerAgent - INFO - Computer: click({'x': 120, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 120, 'y': 53})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:50:24,191 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 257, 'y': 153}, {'x': 259, 'y': 281}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 257, 'y': 153}, {'x': 259, 'y': 281}]})\n",
+ " 99%|███████████████████████████████████████-| 7296/7340 [264:08<1:35, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7296/7340 [264:09<1:35, 27.6 steps/min]\u001b[92m19:50:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:29,058 - agent.ComputerAgent - INFO - Computer: click({'x': 749, 'y': 229})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 749, 'y': 229})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7296/7340 [264:10<1:35, 27.6 steps/min]2025-08-11 19:50:29,733 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:50:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:50:30,431 - agent.ComputerAgent - INFO - LLM processing started with 28 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 28 messages\n",
+ "\u001b[92m19:50:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7297/7340 [264:12<1:33, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7297/7340 [264:16<1:33, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7297/7340 [264:17<1:33, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:50:36,275 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:50:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:50:36 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:37,340 - agent.ComputerAgent - INFO - Computer: click({'x': 229, 'y': 91})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 229, 'y': 91})\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:21<1:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:50:41,280 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ALT+TAB'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ALT+TAB'})\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:23<1:31, 27.6 steps/min]2025-08-11 19:50:42,453 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:50:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:24<1:31, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:25<1:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:50:44,654 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:50:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:26<1:31, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:28<1:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:29<1:31, 27.6 steps/min]\u001b[92m19:50:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:48,511 - agent.ComputerAgent - INFO - Computer: click({'x': 243, 'y': 178})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 243, 'y': 178})\n",
+ " 99%|███████████████████████████████████████-| 7298/7340 [264:30<1:31, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7299/7340 [264:31<1:29, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:50:50 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:50,849 - agent.ComputerAgent - INFO - Computer: click({'x': 138, 'y': 90})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 138, 'y': 90})\n",
+ " 99%|███████████████████████████████████████-| 7299/7340 [264:32<1:29, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ " 99%|███████████████████████████████████████-| 7300/7340 [264:33<1:26, 27.6 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:50:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7300/7340 [264:34<1:26, 27.6 steps/min]\u001b[92m19:50:53 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:53,680 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 749, 'y': 183}, {'x': 837, 'y': 244}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 749, 'y': 183}, {'x': 837, 'y': 244}]})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:50:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ " 99%|███████████████████████████████████████-| 7300/7340 [264:36<1:26, 27.6 steps/min]\u001b[92m19:50:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:50:55,684 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:50:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:50:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:50:56,394 - agent.ComputerAgent - INFO - Computer: click({'x': 258, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 258, 'y': 155})\n",
+ " 99%|███████████████████████████████████████-| 7301/7340 [264:38<1:24, 27.6 steps/min]\u001b[92m19:50:56 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:50:57,053 - agent.ComputerAgent - INFO - Computer: click({'button': 'right', 'x': 118, 'y': 182})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'button': 'right', 'x': 118, 'y': 182})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:50:57,705 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:50:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7302/7340 [264:39<1:22, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7303/7340 [264:42<1:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:01,944 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:51:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7303/7340 [264:43<1:20, 27.6 steps/min]2025-08-11 19:51:02,644 - agent.ComputerAgent - INFO - LLM processing started with 30 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 30 messages\n",
+ "\u001b[92m19:51:02 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7303/7340 [264:44<1:20, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:04,313 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:51:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ " 99%|███████████████████████████████████████-| 7303/7340 [264:46<1:20, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ " 99%|███████████████████████████████████████-| 7303/7340 [264:47<1:20, 27.6 steps/min]\u001b[92m19:51:05 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:06,170 - agent.ComputerAgent - INFO - Computer: click({'x': 285, 'y': 98})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 285, 'y': 98})\n",
+ "100%|███████████████████████████████████████-| 7304/7340 [264:53<1:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:12,936 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:51:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7304/7340 [264:54<1:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7304/7340 [264:57<1:18, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:51:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7304/7340 [264:59<1:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:51:17 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:51:18,032 - agent.ComputerAgent - INFO - Computer: click({'x': 259, 'y': 180})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 259, 'y': 180})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7304/7340 [265:00<1:18, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:51:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:51:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:51:19,864 - agent.ComputerAgent - INFO - Computer: click({'x': 151, 'y': 232})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 151, 'y': 232})\n",
+ "100%|███████████████████████████████████████-| 7305/7340 [265:01<1:16, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:51:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:51:21,195 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 298, 'y': 152}, {'x': 81, 'y': 155}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 298, 'y': 152}, {'x': 81, 'y': 155}]})\n",
+ "100%|███████████████████████████████████████-| 7306/7340 [265:02<1:14, 27.6 steps/min]\u001b[92m19:51:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:21,882 - agent.ComputerAgent - INFO - Computer: click({'x': 59, 'y': 157})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 59, 'y': 157})\n",
+ "100%|███████████████████████████████████████-| 7307/7340 [265:03<1:11, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7308/7340 [265:04<1:09, 27.6 steps/min]2025-08-11 19:51:23,543 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:51:23 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7308/7340 [265:06<1:09, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7308/7340 [265:07<1:09, 27.6 steps/min]\u001b[92m19:51:26 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:26,961 - agent.ComputerAgent - INFO - Computer: click({'x': 762, 'y': 230})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 762, 'y': 230})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:27,615 - agent.ComputerAgent - INFO - LLM processing started with 32 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 32 messages\n",
+ "\u001b[92m19:51:27 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7308/7340 [265:09<1:09, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:28,303 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:51:28 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:51:28,994 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:51:29 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:15<1:07, 27.6 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:35,225 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:51:35 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:17<1:07, 27.6 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:18<1:07, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:38,065 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'ALT+TAB'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'ALT+TAB'})\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:19<1:07, 27.5 steps/min]2025-08-11 19:51:39,205 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:51:39 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:21<1:07, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:22<1:07, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7309/7340 [265:23<1:07, 27.5 steps/min]\u001b[92m19:51:41 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:42,081 - agent.ComputerAgent - INFO - Computer: click({'x': 237, 'y': 95})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 237, 'y': 95})\n",
+ "100%|███████████████████████████████████████-| 7310/7340 [265:25<1:05, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7310/7340 [265:26<1:05, 27.5 steps/min]\u001b[92m19:51:44 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7310/7340 [265:27<1:05, 27.5 steps/min]\u001b[92m19:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:51:45 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:51:46 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:46,954 - agent.ComputerAgent - INFO - Computer: drag({'start_element_description': 'Cell C2', 'end_element_description': 'Cell C10', 'x': 268, 'y': 188})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'start_element_description': 'Cell C2', 'end_element_description': 'Cell C10', 'x': 268, 'y': 188})\n",
+ "100%|███████████████████████████████████████-| 7311/7340 [265:29<1:03, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:51:48,615 - agent.ComputerAgent - INFO - LLM processing started with 35 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 35 messages\n",
+ "\u001b[92m19:51:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7311/7340 [265:30<1:03, 27.5 steps/min]2025-08-11 19:51:49,299 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:51:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7311/7340 [265:31<1:03, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7311/7340 [265:32<1:03, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "\u001b[92m19:51:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:52,164 - agent.ComputerAgent - INFO - Computer: click({'x': 87, 'y': 166})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 87, 'y': 166})\n",
+ "100%|███████████████████████████████████████-| 7312/7340 [265:33<1:01, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:52,805 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m19:51:52 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7313/7340 [265:35<0:58, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:54 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7313/7340 [265:36<0:58, 27.5 steps/min]\u001b[92m19:51:55 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:51:55,701 - agent.ComputerAgent - INFO - Computer: double_click({'x': 841, 'y': 244})\n",
+ "INFO:agent.ComputerAgent:Computer: double_click({'x': 841, 'y': 244})\n",
+ "100%|███████████████████████████████████████-| 7313/7340 [265:37<0:58, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "100%|███████████████████████████████████████-| 7315/7340 [265:38<0:54, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:51:57,894 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m19:51:57 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "2025-08-11 19:51:58,566 - agent.ComputerAgent - INFO - LLM processing started with 34 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 34 messages\n",
+ "\u001b[92m19:51:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7315/7340 [265:40<0:54, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:51:59 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7315/7340 [265:41<0:54, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:52:00 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:00,770 - agent.ComputerAgent - INFO - Computer: click({'x': 422, 'y': 360})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 422, 'y': 360})\n",
+ "100%|███████████████████████████████████████-| 7315/7340 [265:42<0:54, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:01,926 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:52:01 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7316/7340 [265:43<0:52, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "100%|███████████████████████████████████████-| 7317/7340 [265:44<0:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:52:03 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7317/7340 [265:45<0:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:04,815 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m19:52:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:52:04 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:05,539 - agent.ComputerAgent - INFO - Computer: click({'x': 15, 'y': 526})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 15, 'y': 526})\n",
+ "100%|███████████████████████████████████████-| 7317/7340 [265:47<0:50, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:07,328 - agent.ComputerAgent - INFO - Computer: type({'text': 'contacts.csv'})\n",
+ "INFO:agent.ComputerAgent:Computer: type({'text': 'contacts.csv'})\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "100%|███████████████████████████████████████-| 7319/7340 [265:49<0:45, 27.5 steps/min]2025-08-11 19:52:07,988 - agent.ComputerAgent - INFO - LLM processing started with 36 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 36 messages\n",
+ "\u001b[92m19:52:08 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7320/7340 [265:50<0:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:09,185 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m19:52:09 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7320/7340 [265:53<0:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7320/7340 [265:54<0:43, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7321/7340 [265:55<0:41, 27.5 steps/min]2025-08-11 19:52:13,898 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:52:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7321/7340 [265:58<0:41, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7321/7340 [265:59<0:41, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:52:18 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/9053f5ae-149f-4a73-a89e-977f3e750435/close \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7325/7340 [266:00<0:32, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/26dc2412-0699-4a4e-a272-dc576348a5c8/close \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:52:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:52:19 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7325/7340 [266:01<0:32, 27.5 steps/min]\u001b[92m19:52:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:20,887 - agent.ComputerAgent - INFO - Computer: drag({'start_element_description': 'column header A', 'end_element_description': 'column header H', 'x': 90, 'y': 155})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'start_element_description': 'column header A', 'end_element_description': 'column header H', 'x': 90, 'y': 155})\n",
+ "100%|███████████████████████████████████████-| 7326/7340 [266:02<0:30, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:23,480 - agent.ComputerAgent - INFO - Computer: keypress({'keys': 'enter'})\n",
+ "INFO:agent.ComputerAgent:Computer: keypress({'keys': 'enter'})\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7326/7340 [266:05<0:30, 27.5 steps/min]\u001b[92m19:52:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "No screenshot found, taking screenshot\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2025-08-11 19:52:24,777 - agent.ComputerAgent - INFO - LLM processing started with 37 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 37 messages\n",
+ "\u001b[92m19:52:24 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7327/7340 [266:06<0:28, 27.5 steps/min]\u001b[92m19:52:25 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:26,307 - agent.ComputerAgent - INFO - Computer: click({'x': 828, 'y': 35})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 828, 'y': 35})\n",
+ "100%|███████████████████████████████████████-| 7328/7340 [266:10<0:26, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "100%|███████████████████████████████████████-| 7329/7340 [266:11<0:23, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:30,881 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:52:30 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7329/7340 [266:12<0:23, 27.5 steps/min]2025-08-11 19:52:31,557 - agent.ComputerAgent - INFO - LLM processing started with 39 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 39 messages\n",
+ "\u001b[92m19:52:31 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7329/7340 [266:13<0:23, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:32,766 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:52:32 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7329/7340 [266:19<0:23, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "100%|███████████████████████████████████████-| 7330/7340 [266:20<0:21, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7330/7340 [266:21<0:21, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:52:40,663 - agent.ComputerAgent - INFO - LLM processing started with 41 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 41 messages\n",
+ "\u001b[92m19:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:52:40 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:41,739 - agent.ComputerAgent - INFO - Computer: click({'x': 328, 'y': 286})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 328, 'y': 286})\n",
+ "100%|███████████████████████████████████████-| 7331/7340 [266:26<0:19, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "100%|███████████████████████████████████████-| 7332/7340 [266:27<0:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:46,968 - agent.ComputerAgent - INFO - LLM processing started with 43 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 43 messages\n",
+ "\u001b[92m19:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7332/7340 [266:29<0:17, 27.5 steps/min]\u001b[92m19:52:47 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "2025-08-11 19:52:48,348 - agent.ComputerAgent - INFO - LLM processing started with 38 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 38 messages\n",
+ "\u001b[92m19:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "\u001b[92m19:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "\u001b[92m19:52:48 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7332/7340 [266:30<0:17, 27.5 steps/min]\u001b[92m19:52:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:49,902 - agent.ComputerAgent - INFO - Computer: drag({'path': [{'x': 749, 'y': 229}, {'x': 749, 'y': 732}]})\n",
+ "INFO:agent.ComputerAgent:Computer: drag({'path': [{'x': 749, 'y': 229}, {'x': 749, 'y': 732}]})\n",
+ "100%|███████████████████████████████████████-| 7332/7340 [266:31<0:17, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:52:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 400 Bad Request\"\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7334/7340 [266:32<0:13, 27.5 steps/min]Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:52:51 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:52:52,289 - agent.ComputerAgent - INFO - Computer: click({'x': 17, 'y': 386})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 17, 'y': 386})\n",
+ "100%|███████████████████████████████████████-| 7334/7340 [266:34<0:13, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:37<0:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:38<0:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/invoke \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b14fe395-5fa2-43f0-9d0b-23c42f3e9093/close \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:39<0:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:52:58,539 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:52:58 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:40<0:10, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:41<0:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/cb64a220-43d8-4373-bd2a-e73bacb4a122/close \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:54<0:10, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:53:13 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7335/7340 [266:55<0:10, 27.5 steps/min]\u001b[92m19:53:14 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:53:14,687 - agent.ComputerAgent - INFO - Computer: click({'x': 318, 'y': 306})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 318, 'y': 306})\n",
+ "100%|███████████████████████████████████████-| 7336/7340 [266:57<0:08, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7336/7340 [266:58<0:08, 27.5 steps/min]\u001b[92m19:53:16 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:53:17,018 - agent.ComputerAgent - INFO - Computer: click({'x': 49, 'y': 53})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 49, 'y': 53})\n",
+ "100%|███████████████████████████████████████-| 7337/7340 [267:01<0:06, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "2025-08-11 19:53:20,724 - agent.ComputerAgent - INFO - LLM processing started with 40 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 40 messages\n",
+ "\u001b[92m19:53:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7337/7340 [267:02<0:06, 27.5 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7337/7340 [267:03<0:06, 27.5 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7337/7340 [267:23<0:06, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:53:42 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "100%|███████████████████████████████████████-| 7337/7340 [267:24<0:06, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/invoke \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:53:43 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:53:43,990 - agent.ComputerAgent - INFO - Computer: click({'x': 432, 'y': 314})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 432, 'y': 314})\n",
+ "100%|███████████████████████████████████████-| 7337/7340 [267:25<0:06, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/b4eee866-c191-4acf-b232-9b18a3c888ef/close \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7338/7340 [267:29<0:04, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7338/7340 [267:30<0:04, 27.4 steps/min]2025-08-11 19:53:49,710 - agent.ComputerAgent - INFO - LLM processing started with 42 messages\n",
+ "INFO:agent.ComputerAgent:LLM processing started with 42 messages\n",
+ "\u001b[92m19:53:49 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= gpt-5; provider = openai\n",
+ "100%|███████████████████████████████████████-| 7338/7340 [268:00<0:04, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
+ "\u001b[92m19:54:20 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "100%|███████████████████████████████████████-| 7338/7340 [268:02<0:04, 27.4 steps/min]INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.\n",
+ "\u001b[92m19:54:21 - LiteLLM:INFO\u001b[0m: utils.py:3258 - \n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "INFO:LiteLLM:\n",
+ "LiteLLM completion() model= HelloKKMe/GTA1-7B; provider = huggingface-local\n",
+ "2025-08-11 19:54:21,613 - agent.ComputerAgent - INFO - Computer: click({'x': 469, 'y': 487})\n",
+ "INFO:agent.ComputerAgent:Computer: click({'x': 469, 'y': 487})\n",
+ "100%|███████████████████████████████████████-| 7339/7340 [268:08<0:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|███████████████████████████████████████-| 7339/7340 [268:12<0:02, 27.4 steps/min]INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/invoke \"HTTP/1.1 200 OK\"\n",
+ "100%|████████████████████████████████████████| 7340/7340 [268:13<0:00, 27.4 steps/min]\n",
+ "INFO:httpx:HTTP Request: POST https://orchestration.hud.so/hud-gym/api/v2/environments/d71be89e-00e2-40e7-8b8d-38e36bc6d26c/close \"HTTP/1.1 200 OK\"\n",
+ "INFO:httpx:HTTP Request: GET https://orchestration.hud.so/hud-gym/api/v2/jobs/a2c1347a-2925-45ed-b86a-6b475b0dc4eb/trajectories \"HTTP/1.1 200 OK\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'task_count': 360, 'avg_reward': 0.21677517254432735, 'success_rate': 18.333333333333332}\n",
+ "View results at: https://app.hud.so/jobs/a2c1347a-2925-45ed-b86a-6b475b0dc4eb\n"
+ ]
+ }
+ ],
+ "source": [
+ "from agent.integrations.hud import run_job\n",
+ "from hud import load_taskset\n",
+ "from hud.taskset import TaskSet\n",
+ "import logging\n",
+ "import uuid\n",
+ "\n",
+ "# Load taskset\n",
+ "taskset = await load_taskset(\"OSWorld-Verified\")\n",
+ "# taskset = TaskSet(tasks=taskset[:20]) # limit to 20 tasks instead of all 360\n",
+ "\n",
+ "job_name = \"osworld-gta-gpt5\"\n",
+ "job_name = f\"{job_name}-{str(uuid.uuid4())[:4]}\"\n",
+ "\n",
+ "# Run benchmark job\n",
+ "job = await run_job(\n",
+ " # model=\"openai/computer-use-preview\",\n",
+ " model=\"huggingface-local/HelloKKMe/GTA1-7B+openai/gpt-5\",\n",
+ " task_or_taskset=taskset,\n",
+ " job_name=job_name,\n",
+ " max_concurrent_tasks=20,\n",
+ " # add any extra ComputerAgent kwargs:\n",
+ " verbosity=logging.INFO, # Enable logging\n",
+ " trajectory_dir=f\"trajectories/{job_name}\" # Save trajectories locally\n",
+ ")\n",
+ "\n",
+ "# Get results here, or view them at app.hud.so\n",
+ "print(await job.get_analytics())\n",
+ "print(f\"View results at: https://app.hud.so/jobs/{job.id}\")"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "cua",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/scripts/playground.sh b/scripts/playground.sh
index 39710e4c..0cde5a25 100755
--- a/scripts/playground.sh
+++ b/scripts/playground.sh
@@ -257,7 +257,7 @@ from pathlib import Path
from dotenv import load_dotenv
from computer import Computer
from agent import ComputerAgent, LLM, AgentLoop, LLMProvider
-from agent.ui.gradio.app import create_gradio_ui
+from agent.ui.gradio.ui_components import create_gradio_ui
# Load environment variables from .env.local
load_dotenv(Path(__file__).parent / ".env.local")
@@ -292,7 +292,7 @@ from pathlib import Path
from dotenv import load_dotenv
from computer import Computer
from agent import ComputerAgent, LLM, AgentLoop, LLMProvider
-from agent.ui.gradio.app import create_gradio_ui
+from agent.ui.gradio.ui_components import create_gradio_ui
# Load environment variables from .env.local
load_dotenv(Path(__file__).parent / ".env.local")