add concurrent session management and resource isolation

Implement concurrent session management for MCP server with:

- SessionManager with computer instance pooling
- Concurrent task execution support
- New tools: get_session_stats, cleanup_session
- Graceful shutdown and resource cleanup
- Fix nested asyncio event loop issues
- Add comprehensive tests and documentation

Enables multiple concurrent clients with proper resource isolation
while maintaining backward compatibility.
Author: Adam
Date: 2025-10-06 18:37:10 -04:00
Commit: 3274cfafe7 (parent: 671845001c)
5 changed files with 5688 additions and 109 deletions


@@ -0,0 +1,257 @@
# MCP Server Concurrent Session Management
This document describes the improvements made to the MCP Server to address concurrent session management and resource lifecycle issues.
## Problem Statement
The original MCP server implementation had several critical issues:
1. **Global Computer Instance**: Used a single `global_computer` variable shared across all clients
2. **No Resource Isolation**: Multiple clients would interfere with each other
3. **Sequential Task Processing**: Multi-task operations were always sequential
4. **No Graceful Shutdown**: Server couldn't properly cleanup resources on shutdown
5. **Hidden Event Loop**: `server.run()` hid the event loop, preventing proper lifecycle management
## Solution Architecture
### 1. Session Manager (`session_manager.py`)
The `SessionManager` class provides:
- **Per-session computer instances**: Each client gets isolated computer resources
- **Computer instance pooling**: Efficient reuse of computer instances with lifecycle management
- **Task registration**: Track active tasks per session for graceful cleanup
- **Automatic cleanup**: Background task cleans up idle sessions
- **Resource limits**: Configurable maximum concurrent sessions
#### Key Components:
```python
class SessionManager:
    def __init__(self, max_concurrent_sessions: int = 10):
        self._sessions: Dict[str, SessionInfo] = {}
        self._computer_pool = ComputerPool()
        # ... lifecycle management
```
#### Session Lifecycle:
1. **Creation**: New session created when client first connects
2. **Task Registration**: Each task is registered with the session
3. **Activity Tracking**: Last activity time updated on each operation
4. **Cleanup**: Sessions cleaned up when idle or on shutdown
### 2. Computer Pool (`ComputerPool`)
Manages computer instances efficiently:
- **Pool Size Limits**: Maximum number of concurrent computer instances
- **Instance Reuse**: Available instances reused across sessions
- **Lifecycle Management**: Proper startup/shutdown of computer instances
- **Resource Cleanup**: All instances properly closed on shutdown
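The pooling behaviour above can be sketched in a few lines. This standalone toy uses an `asyncio.Semaphore` to cap concurrent instances (the real `ComputerPool` waits with a polling loop instead), with a plain `object()` where a `Computer` would be created and started:

```python
import asyncio
from typing import List

class ToyPool:
    """Bounded pool: reuse idle instances, create up to max_size, else wait."""
    def __init__(self, max_size: int = 2):
        self._sem = asyncio.Semaphore(max_size)  # caps total instances in use
        self._idle: List[object] = []
        self._created = 0

    async def acquire(self) -> object:
        await self._sem.acquire()
        if self._idle:
            return self._idle.pop()  # reuse: no startup cost
        self._created += 1
        return object()  # stand-in for Computer(); real code would await computer.run()

    def release(self, inst: object) -> None:
        self._idle.append(inst)  # return to pool for reuse
        self._sem.release()

async def demo():
    pool = ToyPool(max_size=2)
    a = await pool.acquire()
    b = await pool.acquire()
    pool.release(a)
    c = await pool.acquire()  # reuses `a` instead of creating a third instance
    return pool._created, c is a

print(asyncio.run(demo()))  # → (2, True)
```

The semaphore makes waiting for a free instance event-driven rather than a sleep-and-retry loop, which is one possible refinement of the pool shown later in this commit.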
### 3. Enhanced Server Tools
All server tools now support:
- **Session ID Parameter**: Optional `session_id` for multi-client support
- **Resource Isolation**: Each session gets its own computer instance
- **Task Tracking**: Proper registration/unregistration of tasks
- **Error Handling**: Graceful error handling with session cleanup
#### Updated Tool Signatures:
```python
async def screenshot_cua(ctx: Context, session_id: Optional[str] = None) -> Any:
async def run_cua_task(ctx: Context, task: str, session_id: Optional[str] = None) -> Any:
async def run_multi_cua_tasks(ctx: Context, tasks: List[str], session_id: Optional[str] = None, concurrent: bool = False) -> Any:
```
### 4. Concurrent Task Execution
The `run_multi_cua_tasks` tool now supports:
- **Sequential Mode** (default): Tasks run one after another
- **Concurrent Mode**: Tasks run in parallel using `asyncio.gather()`
- **Progress Tracking**: Proper progress reporting for both modes
- **Error Handling**: Individual task failures don't stop other tasks
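Concurrent mode hinges on `asyncio.gather(..., return_exceptions=True)`: a failing task comes back as an exception value in the results list rather than cancelling its siblings. A minimal standalone sketch with stand-in tasks:

```python
import asyncio
from typing import List

async def run_task(name: str) -> str:
    """Stand-in for a real CUA task."""
    if name == "bad":
        raise RuntimeError("boom")
    await asyncio.sleep(0)  # yield control, as real agent work would
    return f"{name}: done"

async def run_all(names: List[str]) -> List[str]:
    # return_exceptions=True: failures are returned as values, siblings keep running;
    # gather also preserves the input order, so no re-sorting is needed
    raw = await asyncio.gather(*(run_task(n) for n in names), return_exceptions=True)
    return [f"Task failed: {r}" if isinstance(r, Exception) else r for r in raw]

print(asyncio.run(run_all(["a", "bad", "b"])))
# → ['a: done', 'Task failed: boom', 'b: done']
```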
### 5. Graceful Shutdown
The server now provides:
- **Signal Handlers**: Proper handling of SIGINT and SIGTERM
- **Session Cleanup**: All active sessions properly cleaned up
- **Resource Release**: Computer instances returned to pool and closed
- **Async Lifecycle**: Event loop properly exposed for cleanup
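On POSIX systems the same wiring can also be done with `loop.add_signal_handler`, which runs the callback inside the event loop instead of in a raw signal handler. A self-contained sketch (it sends itself SIGTERM so the demo terminates; the cleanup step is a placeholder comment, not the server's actual shutdown code):

```python
import asyncio
import os
import signal

async def main() -> str:
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()
    # asyncio-native handlers: callbacks run safely inside the loop (POSIX only)
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, stop.set)
    # self-signal so the demo exits; a real server would serve requests here
    loop.call_soon(os.kill, os.getpid(), signal.SIGTERM)
    await stop.wait()
    # cleanup phase: a real server would await session-manager shutdown here
    return "shutdown complete"

print(asyncio.run(main()))  # → shutdown complete
```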
## Usage Examples
### Basic Usage (Backward Compatible)
```python
# These calls work exactly as before
await screenshot_cua(ctx)
await run_cua_task(ctx, "Open browser")
await run_multi_cua_tasks(ctx, ["Task 1", "Task 2"])
```
### Multi-Client Usage
```python
# Client 1
session_id_1 = "client-1-session"
await screenshot_cua(ctx, session_id_1)
await run_cua_task(ctx, "Open browser", session_id_1)
# Client 2 (completely isolated)
session_id_2 = "client-2-session"
await screenshot_cua(ctx, session_id_2)
await run_cua_task(ctx, "Open editor", session_id_2)
```
### Concurrent Task Execution
```python
# Run tasks concurrently instead of sequentially
tasks = ["Open browser", "Open editor", "Open terminal"]
results = await run_multi_cua_tasks(ctx, tasks, concurrent=True)
```
### Session Management
```python
# Get session statistics
stats = await get_session_stats(ctx)
print(f"Active sessions: {stats['total_sessions']}")
# Cleanup specific session
await cleanup_session(ctx, "session-to-cleanup")
```
## Configuration
### Environment Variables
- `CUA_MODEL_NAME`: Model to use (default: `anthropic/claude-3-5-sonnet-20241022`)
- `CUA_MAX_IMAGES`: Maximum images to keep (default: `3`)
### Session Manager Configuration
```python
# In session_manager.py
class SessionManager:
    def __init__(self, max_concurrent_sessions: int = 10):
        ...  # configurable maximum concurrent sessions

class ComputerPool:
    def __init__(self, max_size: int = 5, idle_timeout: float = 300.0):
        ...  # configurable pool size and idle timeout
```
## Performance Improvements
### Before (Issues):
- ❌ Single global computer instance
- ❌ Client interference and resource conflicts
- ❌ Sequential task processing only
- ❌ No graceful shutdown
- ❌ 30s timeout issues with long-running tasks
### After (Benefits):
- ✅ Per-session computer instances with proper isolation
- ✅ Computer instance pooling for efficient resource usage
- ✅ Concurrent task execution support
- ✅ Graceful shutdown with proper cleanup
- ✅ Streaming updates prevent timeout issues
- ✅ Configurable resource limits
- ✅ Automatic session cleanup
## Testing
Comprehensive test coverage includes:
- Session creation and reuse
- Concurrent session isolation
- Task registration and cleanup
- Error handling with session management
- Concurrent vs sequential task execution
- Session statistics and cleanup
Run tests with:
```bash
pytest tests/test_mcp_server_session_management.py -v
```
## Migration Guide
### For Existing Clients
No changes required! The new implementation is fully backward compatible:
```python
# This still works exactly as before
await run_cua_task(ctx, "My task")
```
### For New Multi-Client Applications
Use session IDs for proper isolation:
```python
# Create a unique session ID for each client
session_id = str(uuid.uuid4())
await run_cua_task(ctx, "My task", session_id)
```
### For Concurrent Task Execution
Enable concurrent mode for better performance:
```python
tasks = ["Task 1", "Task 2", "Task 3"]
results = await run_multi_cua_tasks(ctx, tasks, concurrent=True)
```
## Monitoring and Debugging
### Session Statistics
```python
stats = await get_session_stats(ctx)
print(f"Total sessions: {stats['total_sessions']}")
print(f"Max concurrent: {stats['max_concurrent']}")
for session_id, session_info in stats['sessions'].items():
print(f"Session {session_id}: {session_info['active_tasks']} active tasks")
```
### Logging
The server provides detailed logging for:
- Session creation and cleanup
- Task registration and completion
- Resource pool usage
- Error conditions and recovery
### Graceful Shutdown
The server properly handles shutdown signals:
```bash
# Send SIGTERM for graceful shutdown
kill -TERM <server_pid>
# Or use Ctrl+C (SIGINT)
```
## Future Enhancements
Potential future improvements:
1. **Session Persistence**: Save/restore session state across restarts
2. **Load Balancing**: Distribute sessions across multiple server instances
3. **Resource Monitoring**: Real-time monitoring of resource usage
4. **Auto-scaling**: Dynamic adjustment of pool size based on demand
5. **Session Timeouts**: Configurable timeouts for different session types


@@ -3,11 +3,14 @@ import base64
import inspect
import logging
import os
import signal
import sys
import traceback
import uuid
from typing import Any, Dict, List, Optional, Union, Tuple
import asyncio
import anyio
# Configure logging to output to stderr for debug visibility
logging.basicConfig(
level=logging.DEBUG, # Changed to DEBUG
@@ -40,8 +43,13 @@ except ImportError as e:
traceback.print_exc(file=sys.stderr)
sys.exit(1)
try:
from .session_manager import get_session_manager, initialize_session_manager, shutdown_session_manager
logger.debug("Successfully imported session manager")
except ImportError as e:
logger.error(f"Failed to import session manager: {e}")
traceback.print_exc(file=sys.stderr)
sys.exit(1)
def get_env_bool(key: str, default: bool = False) -> bool:
"""Get boolean value from environment variable."""
@@ -97,126 +105,141 @@ def serve() -> FastMCP:
server = FastMCP(name="cua-agent")
@server.tool(structured_output=False)
async def screenshot_cua(ctx: Context, session_id: Optional[str] = None) -> Any:
    """
    Take a screenshot of the current MacOS VM screen and return the image.
    Args:
        session_id: Optional session ID for multi-client support. If not provided, a new session will be created.
    """
    session_manager = get_session_manager()
    async with session_manager.get_session(session_id) as session:
        screenshot = await session.computer.interface.screenshot()
        # Returning Image object is fine when structured_output=False
        return Image(format="png", data=screenshot)
@server.tool(structured_output=False)
async def run_cua_task(ctx: Context, task: str, session_id: Optional[str] = None) -> Any:
    """
    Run a Computer-Use Agent (CUA) task in a MacOS VM and return (combined text, final screenshot).
    Args:
        task: The task description for the agent to execute
        session_id: Optional session ID for multi-client support. If not provided, a new session will be created.
    """
    session_manager = get_session_manager()
    task_id = str(uuid.uuid4())
    try:
        logger.info(f"Starting CUA task: {task} (task_id: {task_id})")
        async with session_manager.get_session(session_id) as session:
            # Register this task with the session
            await session_manager.register_task(session.session_id, task_id)
            try:
                # Get model name
                model_name = os.getenv("CUA_MODEL_NAME", "anthropic/claude-3-5-sonnet-20241022")
                logger.info(f"Using model: {model_name}")
                # Create agent with the new v0.4.x API
                agent = ComputerAgent(
                    model=model_name,
                    only_n_most_recent_images=int(os.getenv("CUA_MAX_IMAGES", "3")),
                    verbosity=logging.INFO,
                    tools=[session.computer],
                )
                messages = [{"role": "user", "content": task}]
                # Collect all results
                aggregated_messages: List[str] = []
                async for result in agent.run(messages):
                    logger.info("Agent processing step")
                    ctx.info("Agent processing step")
                    outputs = result.get("output", [])
                    for output in outputs:
                        output_type = output.get("type")
                        if output_type == "message":
                            logger.debug("Streaming assistant message: %s", output)
                            content = _normalise_message_content(output.get("content"))
                            aggregated_text = _extract_text_from_content(content)
                            if aggregated_text:
                                aggregated_messages.append(aggregated_text)
                            await _maybe_call_ctx_method(
                                ctx,
                                "yield_message",
                                role=output.get("role", "assistant"),
                                content=content,
                            )
                        elif output_type in {"tool_use", "computer_call", "function_call"}:
                            logger.debug("Streaming tool call: %s", output)
                            call_id = output.get("id") or output.get("call_id")
                            tool_name = output.get("name") or output.get("action", {}).get("type")
                            tool_input = output.get("input") or output.get("arguments") or output.get("action")
                            if call_id:
                                await _maybe_call_ctx_method(
                                    ctx,
                                    "yield_tool_call",
                                    name=tool_name,
                                    call_id=call_id,
                                    input=tool_input,
                                )
                        elif output_type in {"tool_result", "computer_call_output", "function_call_output"}:
                            logger.debug("Streaming tool output: %s", output)
                            call_id = output.get("call_id") or output.get("id")
                            content = output.get("content") or output.get("output")
                            aggregated_text = _serialise_tool_content(content)
                            if aggregated_text:
                                aggregated_messages.append(aggregated_text)
                            if call_id:
                                await _maybe_call_ctx_method(
                                    ctx,
                                    "yield_tool_output",
                                    call_id=call_id,
                                    output=content,
                                    is_error=output.get("status") == "failed" or output.get("is_error", False),
                                )
                logger.info("CUA task completed successfully")
                ctx.info("CUA task completed successfully")
                screenshot_image = Image(
                    format="png",
                    data=await session.computer.interface.screenshot(),
                )
                return (
                    "\n".join(aggregated_messages).strip() or "Task completed with no text output.",
                    screenshot_image,
                )
            finally:
                # Unregister the task from the session
                await session_manager.unregister_task(session.session_id, task_id)
    except Exception as e:
        error_msg = f"Error running CUA task: {str(e)}\n{traceback.format_exc()}"
        logger.error(error_msg)
        ctx.error(error_msg)
        # Try to get a screenshot from the session if available
        try:
            if session_id:
                async with session_manager.get_session(session_id) as session:
                    screenshot = await session.computer.interface.screenshot()
                    return (
                        f"Error during task execution: {str(e)}",
                        Image(format="png", data=screenshot),
                    )
        except Exception:
            pass
        # If we can't get a screenshot, return a placeholder
        return (
            f"Error during task execution: {str(e)}",
@@ -224,37 +247,148 @@ def serve() -> FastMCP:
)
@server.tool(structured_output=False)
async def run_multi_cua_tasks(ctx: Context, tasks: List[str], session_id: Optional[str] = None, concurrent: bool = False) -> Any:
    """
    Run multiple CUA tasks and return a list of (combined text, screenshot).
    Args:
        tasks: List of task descriptions to execute
        session_id: Optional session ID for multi-client support. If not provided, a new session will be created.
        concurrent: If True, run tasks concurrently. If False, run sequentially (default).
    """
    total_tasks = len(tasks)
    if total_tasks == 0:
        ctx.report_progress(1.0)
        return []
    session_manager = get_session_manager()
    if concurrent and total_tasks > 1:
        # Run tasks concurrently
        logger.info(f"Running {total_tasks} tasks concurrently")
        ctx.info(f"Running {total_tasks} tasks concurrently")
        # Create tasks with progress tracking
        async def run_task_with_progress(task_index: int, task: str) -> Tuple[int, Tuple[str, Image]]:
            ctx.report_progress(task_index / total_tasks)
            result = await run_cua_task(ctx, task, session_id)
            ctx.report_progress((task_index + 1) / total_tasks)
            return task_index, result
        # Create all task coroutines
        task_coroutines = [run_task_with_progress(i, task) for i, task in enumerate(tasks)]
        # Wait for all tasks to complete (gather preserves input order)
        results_with_indices = await asyncio.gather(*task_coroutines, return_exceptions=True)
        # Collect results in original task order and handle exceptions
        results: List[Tuple[str, Image]] = []
        for result in results_with_indices:
            if isinstance(result, Exception):
                logger.error(f"Task failed with exception: {result}")
                ctx.error(f"Task failed: {str(result)}")
                results.append((f"Task failed: {str(result)}", Image(format="png", data=b"")))
            else:
                _, task_result = result
                results.append(task_result)
        return results
    else:
        # Run tasks sequentially (original behavior)
        logger.info(f"Running {total_tasks} tasks sequentially")
        ctx.info(f"Running {total_tasks} tasks sequentially")
        results = []
        for i, task in enumerate(tasks):
            logger.info(f"Running task {i+1}/{total_tasks}: {task}")
            ctx.info(f"Running task {i+1}/{total_tasks}: {task}")
            ctx.report_progress(i / total_tasks)
            task_result = await run_cua_task(ctx, task, session_id)
            results.append(task_result)
            ctx.report_progress((i + 1) / total_tasks)
        return results
@server.tool(structured_output=False)
async def get_session_stats(ctx: Context) -> Dict[str, Any]:
"""
Get statistics about active sessions and resource usage.
"""
session_manager = get_session_manager()
return session_manager.get_session_stats()
@server.tool(structured_output=False)
async def cleanup_session(ctx: Context, session_id: str) -> str:
"""
Cleanup a specific session and release its resources.
Args:
session_id: The session ID to cleanup
"""
session_manager = get_session_manager()
await session_manager.cleanup_session(session_id)
return f"Session {session_id} cleanup initiated"
return server
server = serve()
async def run_server():
    """Run the MCP server with proper lifecycle management."""
    session_manager = None
    try:
        logger.debug("Starting MCP server...")
# Initialize session manager
session_manager = await initialize_session_manager()
logger.info("Session manager initialized")
# Set up signal handlers for graceful shutdown
def signal_handler(signum, frame):
logger.info(f"Received signal {signum}, initiating graceful shutdown...")
# Create a task to shutdown gracefully
asyncio.create_task(graceful_shutdown())
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
# Start the server
logger.info("Starting FastMCP server...")
# Use run_stdio_async directly instead of server.run() to avoid nested event loops
await server.run_stdio_async()
except Exception as e:
logger.error(f"Error starting server: {e}")
traceback.print_exc(file=sys.stderr)
raise
finally:
# Ensure cleanup happens
if session_manager:
logger.info("Shutting down session manager...")
await shutdown_session_manager()
async def graceful_shutdown():
"""Gracefully shutdown the server and all sessions."""
logger.info("Initiating graceful shutdown...")
try:
await shutdown_session_manager()
logger.info("Graceful shutdown completed")
except Exception as e:
logger.error(f"Error during graceful shutdown: {e}")
finally:
# Exit the process
import os
os._exit(0)
def main():
"""Run the MCP server with proper async lifecycle management."""
try:
# Use anyio.run instead of asyncio.run to avoid nested event loop issues
anyio.run(run_server)
except KeyboardInterrupt:
logger.info("Server interrupted by user")
except Exception as e:
logger.error(f"Error starting server: {e}")
traceback.print_exc(file=sys.stderr)


@@ -0,0 +1,310 @@
"""
Session Manager for MCP Server - Handles concurrent client sessions with proper resource isolation.
This module provides:
- Per-session computer instance management
- Resource pooling and lifecycle management
- Graceful session cleanup
- Concurrent task execution support
"""
import asyncio
import logging
import time
import uuid
from typing import Dict, Optional, Any, List, Set
from dataclasses import dataclass, field
from contextlib import asynccontextmanager
import weakref
logger = logging.getLogger("mcp-server.session_manager")
@dataclass
class SessionInfo:
"""Information about an active session."""
session_id: str
computer: Any # Computer instance
created_at: float
last_activity: float
active_tasks: Set[str] = field(default_factory=set)
is_shutting_down: bool = False
class ComputerPool:
"""Pool of computer instances for efficient resource management."""
def __init__(self, max_size: int = 5, idle_timeout: float = 300.0):
self.max_size = max_size
self.idle_timeout = idle_timeout
self._available: List[Any] = []
self._in_use: Set[Any] = set()
self._creation_lock = asyncio.Lock()
async def acquire(self) -> Any:
"""Acquire a computer instance from the pool."""
# Try to get an available instance
if self._available:
computer = self._available.pop()
self._in_use.add(computer)
logger.debug(f"Reusing computer instance from pool")
return computer
# Check if we can create a new one
async with self._creation_lock:
if len(self._in_use) < self.max_size:
logger.debug("Creating new computer instance")
from computer import Computer
computer = Computer(verbosity=logging.INFO)
await computer.run()
self._in_use.add(computer)
return computer
# Wait for an instance to become available
logger.debug("Waiting for computer instance to become available")
while not self._available:
await asyncio.sleep(0.1)
computer = self._available.pop()
self._in_use.add(computer)
return computer
async def release(self, computer: Any) -> None:
"""Release a computer instance back to the pool."""
if computer in self._in_use:
self._in_use.remove(computer)
self._available.append(computer)
logger.debug("Released computer instance back to pool")
async def cleanup_idle(self) -> None:
    """Clean up idle computer instances.

    Not yet implemented: honouring idle_timeout would require tracking a
    last-use timestamp per instance, so for now instances stay in the pool.
    """
async def shutdown(self) -> None:
"""Shutdown all computer instances in the pool."""
logger.info("Shutting down computer pool")
# Close all available instances
for computer in self._available:
try:
if hasattr(computer, 'close'):
await computer.close()
elif hasattr(computer, 'stop'):
await computer.stop()
except Exception as e:
logger.warning(f"Error closing computer instance: {e}")
# Close all in-use instances
for computer in self._in_use:
try:
if hasattr(computer, 'close'):
await computer.close()
elif hasattr(computer, 'stop'):
await computer.stop()
except Exception as e:
logger.warning(f"Error closing computer instance: {e}")
self._available.clear()
self._in_use.clear()
class SessionManager:
"""Manages concurrent client sessions with proper resource isolation."""
def __init__(self, max_concurrent_sessions: int = 10):
self.max_concurrent_sessions = max_concurrent_sessions
self._sessions: Dict[str, SessionInfo] = {}
self._computer_pool = ComputerPool()
self._session_lock = asyncio.Lock()
self._cleanup_task: Optional[asyncio.Task] = None
self._shutdown_event = asyncio.Event()
async def start(self) -> None:
"""Start the session manager and cleanup task."""
logger.info("Starting session manager")
self._cleanup_task = asyncio.create_task(self._cleanup_loop())
async def stop(self) -> None:
"""Stop the session manager and cleanup all resources."""
logger.info("Stopping session manager")
self._shutdown_event.set()
if self._cleanup_task:
self._cleanup_task.cancel()
try:
await self._cleanup_task
except asyncio.CancelledError:
pass
# Force cleanup all sessions: collect the ids under the lock, then clean up
# outside it, because _force_cleanup_session re-acquires _session_lock and
# asyncio.Lock is not re-entrant
async with self._session_lock:
    session_ids = list(self._sessions.keys())
for session_id in session_ids:
    await self._force_cleanup_session(session_id)
await self._computer_pool.shutdown()
@asynccontextmanager
async def get_session(self, session_id: Optional[str] = None) -> Any:
"""Get or create a session with proper resource management."""
if session_id is None:
session_id = str(uuid.uuid4())
# Check if session exists and is not shutting down
async with self._session_lock:
if session_id in self._sessions:
session = self._sessions[session_id]
if session.is_shutting_down:
raise RuntimeError(f"Session {session_id} is shutting down")
session.last_activity = time.time()
computer = session.computer
else:
# Create new session
if len(self._sessions) >= self.max_concurrent_sessions:
raise RuntimeError(f"Maximum concurrent sessions ({self.max_concurrent_sessions}) reached")
computer = await self._computer_pool.acquire()
session = SessionInfo(
session_id=session_id,
computer=computer,
created_at=time.time(),
last_activity=time.time()
)
self._sessions[session_id] = session
logger.info(f"Created new session: {session_id}")
try:
yield session
finally:
# Update last activity
async with self._session_lock:
if session_id in self._sessions:
self._sessions[session_id].last_activity = time.time()
async def register_task(self, session_id: str, task_id: str) -> None:
"""Register a task for a session."""
async with self._session_lock:
if session_id in self._sessions:
self._sessions[session_id].active_tasks.add(task_id)
logger.debug(f"Registered task {task_id} for session {session_id}")
async def unregister_task(self, session_id: str, task_id: str) -> None:
"""Unregister a task from a session."""
async with self._session_lock:
if session_id in self._sessions:
self._sessions[session_id].active_tasks.discard(task_id)
logger.debug(f"Unregistered task {task_id} from session {session_id}")
async def cleanup_session(self, session_id: str) -> None:
    """Cleanup a specific session."""
    async with self._session_lock:
        if session_id not in self._sessions:
            return
        session = self._sessions[session_id]
        # Check if session has active tasks
        if session.active_tasks:
            logger.info(f"Session {session_id} has active tasks, marking for shutdown")
            session.is_shutting_down = True
            return
    # Release the lock before the actual cleanup: _force_cleanup_session
    # re-acquires _session_lock, and asyncio.Lock is not re-entrant
    await self._force_cleanup_session(session_id)
async def _force_cleanup_session(self, session_id: str) -> None:
"""Force cleanup a session regardless of active tasks."""
async with self._session_lock:
if session_id not in self._sessions:
return
session = self._sessions[session_id]
logger.info(f"Cleaning up session: {session_id}")
# Release computer back to pool
await self._computer_pool.release(session.computer)
# Remove session
del self._sessions[session_id]
async def _cleanup_loop(self) -> None:
    """Background task to cleanup idle sessions."""
    while not self._shutdown_event.is_set():
        try:
            await asyncio.sleep(60)  # Run cleanup every minute
            current_time = time.time()
            idle_timeout = 600.0  # 10 minutes
            async with self._session_lock:
                idle_sessions = [
                    session_id
                    for session_id, session in self._sessions.items()
                    if not session.is_shutting_down
                    and not session.active_tasks
                    and current_time - session.last_activity > idle_timeout
                ]
            # Cleanup outside the lock: _force_cleanup_session re-acquires it
            for session_id in idle_sessions:
                await self._force_cleanup_session(session_id)
                logger.info(f"Cleaned up idle session: {session_id}")
        except asyncio.CancelledError:
            break
        except Exception as e:
            logger.error(f"Error in cleanup loop: {e}")
def get_session_stats(self) -> Dict[str, Any]:
    """Get a snapshot of statistics about active sessions.

    The snapshot is built without taking _session_lock: this method is
    called synchronously from coroutines on the running event loop, where
    blocking on run_coroutine_threadsafe(...).result() would deadlock, and
    a plain read of the dict is safe in asyncio's single-threaded model.
    """
    return {
        "total_sessions": len(self._sessions),
        "max_concurrent": self.max_concurrent_sessions,
        "sessions": {
            session_id: {
                "created_at": session.created_at,
                "last_activity": session.last_activity,
                "active_tasks": len(session.active_tasks),
                "is_shutting_down": session.is_shutting_down,
            }
            for session_id, session in self._sessions.items()
        },
    }
# Global session manager instance
_session_manager: Optional[SessionManager] = None
def get_session_manager() -> SessionManager:
"""Get the global session manager instance."""
global _session_manager
if _session_manager is None:
_session_manager = SessionManager()
return _session_manager
async def initialize_session_manager() -> SessionManager:
    """Initialize and start the global session manager."""
    global _session_manager
    if _session_manager is None:
        _session_manager = SessionManager()
        await _session_manager.start()
    return _session_manager
async def shutdown_session_manager() -> None:
"""Shutdown the global session manager."""
global _session_manager
if _session_manager is not None:
await _session_manager.stop()
_session_manager = None

libs/python/mcp-server/pdm.lock (generated, 4465 lines): file diff suppressed because it is too large.