diff --git a/.bivvy/abcd-climb.md b/.bivvy/abcd-climb.md
deleted file mode 100644
index 72ca30a36..000000000
--- a/.bivvy/abcd-climb.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: abcd
-type: feature
-description: This is an example Climb
----
-## Example PRD
-
-TODO
\ No newline at end of file
diff --git a/.bivvy/abcd-moves.json b/.bivvy/abcd-moves.json
deleted file mode 100644
index 3f84260c1..000000000
--- a/.bivvy/abcd-moves.json
+++ /dev/null
@@ -1,21 +0,0 @@
-{
- "climb": "0000",
- "moves": [
- {
- "status": "complete",
- "description": "install the dependencies",
- "details": "install the deps listed as New Dependencies"
- }, {
- "status": "skip",
- "description": "Write tests"
- }, {
- "status": "climbing",
- "description": "Build the first part of the feature",
- "rest": "true"
- }, {
- "status": "todo",
- "description": "Build the last part of the feature",
- "details": "After this, you'd ask the user if they want to return to write tests"
- }
- ]
-}
\ No newline at end of file
diff --git a/.bivvy/k8P2-climb.md b/.bivvy/k8P2-climb.md
deleted file mode 100644
index 10d3dd9dd..000000000
--- a/.bivvy/k8P2-climb.md
+++ /dev/null
@@ -1,139 +0,0 @@
-**STARTFILE k8P2-climb.md**
-
----
-id: k8P2
-type: bug
-description: Fix RClone backup jobs not appearing in jobs list and missing status data
----
-
-- **New dependencies**: None - this is a bug fix for existing functionality
-- **Prerequisites**: None - working with existing backup service implementation
-
-**Relevant files**:
-- api/src/unraid-api/graph/resolvers/rclone/rclone-api.service.ts (main RClone API service)
-- api/src/unraid-api/graph/resolvers/backup/backup-mutations.resolver.ts (backup mutations)
-- web/components/Backup/BackupOverview.vue (frontend backup overview)
-- web/components/Backup/backup-jobs.query.ts (GraphQL query for jobs)
-- api/src/unraid-api/graph/resolvers/backup/backup-queries.resolver.ts (backup queries resolver)
-
-
-## Problem Statement
-
-The newly implemented backup service has two critical issues:
-1. **Jobs not appearing in non-system jobs list**: When users trigger backup jobs via the "Run Now" button in BackupOverview.vue, these jobs are not showing up in the jobs list query, even when `showSystemJobs: false`
-2. **Missing job status data**: Jobs that are started don't return proper status information, making it impossible to track backup progress
-
-## Background
-
-This issue emerged immediately after implementing the new backup service. The backup functionality uses:
-- RClone RC daemon for job execution via Unix socket
-- GraphQL mutations for triggering backups (`triggerJob`, `initiateBackup`)
-- Job grouping system with groups like `backup/manual` and `backup/${id}`
-- Vue.js frontend with real-time job status monitoring
-
-## Root Cause Analysis Areas
-
-### 1. Job Group Classification
-The current implementation sets job groups as:
-- `backup/manual` for manual backups
-- `backup/${id}` for configured job backups
-
-**Potential Issue**: The jobs query may be filtering these groups incorrectly, classifying user-initiated backups as "system jobs"
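-
-The classification itself can be as simple as a prefix check on the job group. A minimal sketch, assuming each job carries a `group` string and that anything outside the `backup/` prefix is treated as a system job (the helper names are illustrative, not the existing resolver code), so that `showSystemJobs: false` still returns user-initiated backups:
-
-```typescript
-// Hypothetical helpers: classify jobs by group prefix so that groups created by
-// the backup mutations ("backup/manual", "backup/<id>") are never "system" jobs.
-const BACKUP_GROUP_PREFIX = 'backup/';
-
-export function isSystemJob(group: string | undefined): boolean {
-    if (!group) return true; // ungrouped daemon jobs count as system jobs
-    return !group.startsWith(BACKUP_GROUP_PREFIX);
-}
-
-export function filterJobs<T extends { group?: string }>(
-    jobs: T[],
-    showSystemJobs: boolean
-): T[] {
-    return showSystemJobs ? jobs : jobs.filter((job) => !isSystemJob(job.group));
-}
-```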
-
-### 2. RClone API Response Handling
-**Potential Issue**: The `startBackup` method may not be properly handling or returning job metadata from RClone RC API responses
-
-### 3. Job Status Synchronization
-**Potential Issue**: There may be a disconnect between job initiation and the jobs listing/status APIs
-
-### 4. Logging Deficiency
-**Current Gap**: Insufficient logging around RClone API responses makes debugging difficult
-
-## Technical Requirements
-
-### Enhanced Logging
-- Add comprehensive debug logging for all RClone API calls and responses (see the logging sketch after this list)
-- Log job initiation parameters and returned job metadata
-- Log job listing and filtering logic
-- Add structured logging for job group classification
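-
-A minimal sketch of what that logging could look like, using the NestJS `Logger` already available to the service, while being careful not to log credentials from remote configurations (see Security Considerations below). The wrapper name and call shape are illustrative, not the current `rclone-api.service.ts` code:
-
-```typescript
-import { Logger } from '@nestjs/common';
-
-const logger = new Logger('RCloneApiService');
-
-// Illustrative wrapper: log the request parameters and the raw response of every
-// RC call so job ids, groups, and metadata can be traced when jobs go missing.
-async function callRcLogged<T>(
-    endpoint: string,
-    params: Record<string, unknown>,
-    doCall: (endpoint: string, params: Record<string, unknown>) => Promise<T>
-): Promise<T> {
-    logger.debug(`RC request ${endpoint}: ${JSON.stringify(params)}`);
-    try {
-        const response = await doCall(endpoint, params);
-        logger.debug(`RC response ${endpoint}: ${JSON.stringify(response)}`);
-        return response;
-    } catch (error) {
-        logger.error(`RC call ${endpoint} failed: ${(error as Error).message}`);
-        throw error;
-    }
-}
-```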
-
-### Job Classification Fix
-- Ensure user-initiated backup jobs are properly classified as non-system jobs
-- Review and fix job group filtering logic in the jobs query resolver
-- Validate that job groups `backup/manual` and `backup/${id}` are treated as non-system
-
-### Status Data Flow
-- Verify job ID propagation from RClone startBackup response
-- Ensure job status API correctly retrieves and formats status data (see the status sketch after this list)
-- Fix any data transformation issues between RClone API and GraphQL responses
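-
-For reference, the status data ultimately comes from the RClone RC endpoints `job/status` (lifecycle fields) and `core/stats` (bytes, speed, ETA). A minimal sketch of fetching both over the Unix socket, assuming an axios client and a socket path of `/var/run/rclone.sock` (both assumptions); only the response fields this fix cares about are typed:
-
-```typescript
-import axios from 'axios';
-
-// Assumed socket path; use whatever path the RC daemon was actually started with.
-const RCLONE_SOCKET = '/var/run/rclone.sock';
-
-async function rc<T>(endpoint: string, params: Record<string, unknown>): Promise<T> {
-    const { data } = await axios.post<T>(`http://localhost${endpoint}`, params, {
-        socketPath: RCLONE_SOCKET,
-    });
-    return data;
-}
-
-// Combine job lifecycle info with transfer stats for a single backup job.
-export async function getJobStatusWithStats(jobId: number, group: string) {
-    const status = await rc<{ finished: boolean; success: boolean; error: string }>(
-        '/job/status',
-        { jobid: jobId }
-    );
-    const stats = await rc<{ bytes: number; speed: number; eta: number | null }>(
-        '/core/stats',
-        { group }
-    );
-    return { ...status, ...stats };
-}
-```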
-
-### Data Model Consistency
-- Ensure BackupJob GraphQL type includes all necessary fields (note: current linter error shows missing 'type' field)
-- Verify job data structure consistency between API and frontend
-
-## Acceptance Criteria
-
-### Primary Fixes
-1. **Jobs Visibility**: User-triggered backup jobs appear in the jobs list when `showSystemJobs: false`
-2. **Status Data**: Job status data (progress, speed, ETA, etc.) is properly retrieved and displayed
-3. **Job ID Tracking**: Job IDs are properly returned and can be used for status queries
-
-### Secondary Improvements
-4. **Enhanced Logging**: Comprehensive logging for debugging RClone interactions
-5. **Type Safety**: Fix TypeScript/linting errors in BackupOverview.vue
-6. **System Jobs Investigation**: Document findings about excessive system jobs
-
-## Testing Approach
-
-### Manual Testing
-1. Trigger backup via "Run Now" button in BackupOverview.vue
-2. Verify job appears in running jobs list (with showSystemJobs: false)
-3. Confirm job status data displays correctly (progress, speed, etc.)
-4. Test both `triggerJob` (configured jobs) and `initiateBackup` (manual jobs) flows
-
-### API Testing
-1. Verify RClone API responses contain expected job metadata
-2. Test job listing API with various group filters
-3. Validate job status API returns complete data
-
-### Edge Cases
-1. Test behavior when RClone daemon is restarted
-2. Test concurrent backup jobs
-3. Test backup job cancellation/completion scenarios
-
-## Implementation Strategy
-
-### Phase 1: Debugging & Logging
-- Add comprehensive logging to RClone API service
-- Log all API responses and job metadata
-- Add logging to job filtering logic
-
-### Phase 2: Job Classification Fix
-- Fix job group filtering in backup queries resolver
-- Ensure proper non-system job classification
-- Test job visibility in frontend
-
-### Phase 3: Status Data Fix
-- Fix job status data retrieval and formatting
-- Ensure complete job metadata is available
-- Fix TypeScript/GraphQL type issues
-
-### Phase 4: Validation & Testing
-- Comprehensive testing of backup job lifecycle
-- Validate all acceptance criteria
-- Document system jobs investigation findings
-
-## Security Considerations
-- Ensure logging doesn't expose sensitive backup configuration data
-- Maintain proper authentication/authorization for backup operations
-- Validate that job status queries don't leak information between users
-
-## Performance Considerations
-- Ensure logging doesn't significantly impact performance
-- Optimize job listing queries if necessary
-- Consider caching strategies for frequently accessed job data
-
-## Known Constraints
-- Must work with existing RClone RC daemon setup
-- Cannot break existing backup functionality during fixes
-- Must maintain backward compatibility with existing backup configurations
-
-**ENDFILE**
\ No newline at end of file
diff --git a/.bivvy/k8P2-moves.json b/.bivvy/k8P2-moves.json
deleted file mode 100644
index 9c3cf53a4..000000000
--- a/.bivvy/k8P2-moves.json
+++ /dev/null
@@ -1,53 +0,0 @@
-{
- "Climb": "k8P2",
- "moves": [
- {
- "status": "complete",
- "description": "Investigate current backup jobs query resolver implementation",
- "details": "Find and examine the backup-queries.resolver.ts to understand how jobs are currently filtered and what determines system vs non-system jobs"
- },
- {
- "status": "complete",
- "description": "Add enhanced logging to RClone API service",
- "details": "Add comprehensive debug logging to startBackup, listRunningJobs, and getJobStatus methods in rclone-api.service.ts to capture API responses and job metadata"
- },
- {
- "status": "complete",
- "description": "Add logging to job filtering logic",
- "details": "Add logging to the backup jobs query resolver to understand how jobs are being classified and filtered",
- "rest": true
- },
- {
- "status": "complete",
- "description": "Fix job group classification in backup queries resolver",
- "details": "Ensure that jobs with groups 'backup/manual' and 'backup/{id}' are properly classified as non-system jobs"
- },
- {
- "status": "complete",
- "description": "Verify job ID propagation from RClone responses",
- "details": "Ensure that job IDs returned from RClone startBackup are properly captured and returned in GraphQL mutations"
- },
- {
- "status": "todo",
- "description": "Fix job status data retrieval and formatting",
- "details": "Ensure getJobStatus properly retrieves and formats all status data (progress, speed, ETA, etc.) for display in the frontend",
- "rest": true
- },
- {
- "status": "todo",
- "description": "Fix TypeScript errors in BackupOverview.vue",
- "details": "Add missing 'type' field to BackupJob GraphQL type and fix any other type inconsistencies"
- },
- {
- "status": "todo",
- "description": "Test job visibility and status data end-to-end",
- "details": "Manually test triggering backup jobs via 'Run Now' button and verify they appear in jobs list with proper status data",
- "rest": true
- },
- {
- "status": "todo",
- "description": "Document system jobs investigation findings",
- "details": "Investigate why there are many system jobs running and document findings for potential future work"
- }
- ]
-}
\ No newline at end of file
diff --git a/.bivvy/m9X4-climb.md b/.bivvy/m9X4-climb.md
deleted file mode 100644
index 51bd1bc08..000000000
--- a/.bivvy/m9X4-climb.md
+++ /dev/null
@@ -1,1412 +0,0 @@
----
-id: m9X4
-type: feature
-description: Add preprocessing capabilities to backup jobs for ZFS pools, flash backups, Docker containers, and custom scripts
----
-
-- **New dependencies**: None - will use existing system commands and utilities
-- **Prerequisites**: None - extends existing backup system
-
-**Relevant files**:
-- api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts (main service)
-- api/src/unraid-api/graph/resolvers/backup/backup.model.ts (GraphQL models)
-- plugin/source/dynamix.unraid.net/usr/local/emhttp/plugins/dynamix.my.servers/include/UpdateFlashBackup.php (flash backup reference)
-- web/components/Backup/ (UI components)
-
-## Feature Overview
-
-**Feature Name**: Backup Job Preprocessing System
-**Purpose**: Enable backup jobs to run preprocessing steps before the actual backup operation, supporting specialized backup scenarios like ZFS snapshots, flash drive backups, Docker container management, and custom user scripts.
-
-**Problem Being Solved**:
-- Current backup system only supports direct file/folder backups via rclone
-- ZFS pools need snapshot creation before backup
-- Flash drive backups require git repository compression
-- Users need ability to run custom preparation scripts
-
-**Success Metrics**:
-- Backup jobs can successfully execute preprocessing steps
-- ZFS snapshot backups work reliably
-- Flash backup integration functions correctly
-- Docker container backup workflows complete without data corruption
-- Custom scripts execute safely in isolated environments
-
-## Requirements
-
-### Functional Requirements
-
-**Core Preprocessing Types**:
-1. **ZFS Snapshot**: Create ZFS snapshot, stream snapshot data directly to destination
-2. **Flash Backup**: Compress git repository from /boot/.git and stream directly to destination
-3. **Custom Script**: Execute user-provided script for custom preprocessing (non-streaming)
-4. **None**: Direct backup (current behavior)
-
-**Preprocessing Workflow**:
-1. Execute preprocessing step
-2. For streaming operations: pipe data directly to rclone daemon via rcat
-3. For non-streaming operations: update sourcePath to preprocessed location
-4. Execute cleanup/postprocessing if required
-5. Log all steps and handle errors gracefully
-
-**Configuration Options**:
-- Preprocessing type selection
-- Type-specific parameters (ZFS pool name, Docker container name, script path)
-- Streaming vs file-based backup mode
-- Timeout settings for preprocessing steps
-- Cleanup behavior configuration
-
-### Technical Requirements
-
-**Performance**: Preprocessing should complete within reasonable timeframes (configurable timeouts)
-**Security**: Custom scripts run in controlled environment with limited permissions
-**Reliability**: Failed preprocessing should not leave system in inconsistent state
-**Logging**: Comprehensive logging of all preprocessing steps
-**Streaming**: Leverage rclone daemon's streaming capabilities for efficient data transfer
-
-### User Requirements
-
-**Configuration UI**: Simple dropdown to select preprocessing type with dynamic form fields
-**Status Visibility**: Clear indication of preprocessing status in job monitoring
-**Error Handling**: Meaningful error messages for preprocessing failures
-
-## Design and Implementation
-
-### Data Model Changes
-
-**Internal DTO Classes for Validation** (not exposed via GraphQL):
-```typescript
-import {
- IsString,
- IsOptional,
- IsBoolean,
- IsNumber,
- IsArray,
- IsEnum,
- IsPositive,
- Min,
- Max,
- ValidateNested,
- IsNotEmpty,
- Matches
-} from 'class-validator';
-import { Type, Transform } from 'class-transformer';
-
-export enum PreprocessType {
- NONE = 'none',
- ZFS = 'zfs',
- FLASH = 'flash',
- SCRIPT = 'script'
-}
-
-export class ZfsPreprocessConfigDto {
- @IsString()
- @IsNotEmpty()
- @Matches(/^[a-zA-Z0-9_\-\/]+$/, { message: 'Pool name must contain only alphanumeric characters, underscores, hyphens, and forward slashes' })
- poolName!: string;
-
- @IsOptional()
- @IsString()
- @Matches(/^[a-zA-Z0-9_\-]+$/, { message: 'Snapshot name must contain only alphanumeric characters, underscores, and hyphens' })
- snapshotName?: string;
-
- @IsOptional()
- @IsBoolean()
- @Transform(({ value }) => value !== false)
- streamDirect?: boolean = true;
-
- @IsOptional()
- @IsNumber()
- @Min(1)
- @Max(9)
- compressionLevel?: number;
-}
-
-export class FlashPreprocessConfigDto {
- @IsOptional()
- @IsString()
- @Matches(/^\/[a-zA-Z0-9_\-\/\.]+$/, { message: 'Git path must be an absolute path' })
- gitPath?: string = '/boot/.git';
-
- @IsOptional()
- @IsNumber()
- @Min(1)
- @Max(9)
- compressionLevel?: number;
-
- @IsOptional()
- @IsBoolean()
- @Transform(({ value }) => value !== false)
- streamDirect?: boolean = true;
-
- @IsOptional()
- @IsString()
- @Matches(/^\/[a-zA-Z0-9_\-\/\.]+$/, { message: 'Local cache path must be an absolute path' })
- localCachePath?: string;
-
- @IsOptional()
- @IsString()
- @IsNotEmpty()
- commitMessage?: string;
-
- @IsOptional()
- @IsBoolean()
- @Transform(({ value }) => value !== false)
- includeGitHistory?: boolean = true;
-}
-
-export class ScriptPreprocessConfigDto {
- @IsString()
- @IsNotEmpty()
- @Matches(/^\/[a-zA-Z0-9_\-\/\.]+$/, { message: 'Script path must be an absolute path' })
- scriptPath!: string;
-
- @IsOptional()
- @IsArray()
- @IsString({ each: true })
- scriptArgs?: string[];
-
- @IsOptional()
- @IsString()
- @Matches(/^\/[a-zA-Z0-9_\-\/\.]*$/, { message: 'Working directory must be an absolute path' })
- workingDirectory?: string;
-
- @IsOptional()
- @IsNumber()
- @IsPositive()
- @Max(3600)
- timeout?: number;
-}
-
-export class PreprocessConfigDto {
- @IsOptional()
- @ValidateNested()
- @Type(() => ZfsPreprocessConfigDto)
- zfs?: ZfsPreprocessConfigDto;
-
- @IsOptional()
- @ValidateNested()
- @Type(() => FlashPreprocessConfigDto)
- flash?: FlashPreprocessConfigDto;
-
- @IsOptional()
- @ValidateNested()
- @Type(() => ScriptPreprocessConfigDto)
- script?: ScriptPreprocessConfigDto;
-}
-
-// Internal DTO for service layer validation
-export class BackupJobPreprocessDto {
- @IsEnum(PreprocessType)
- preprocessType!: PreprocessType;
-
- @IsOptional()
- @ValidateNested()
- @Type(() => PreprocessConfigDto)
- preprocessConfig?: PreprocessConfigDto;
-
- @IsOptional()
- @IsNumber()
- @IsPositive()
- @Max(3600)
- preprocessTimeout?: number = 300;
-
- @IsOptional()
- @IsBoolean()
- cleanupOnFailure?: boolean = true;
-}
-```
-
-**Extended BackupJobConfigData Interface** (internal):
-```typescript
-interface BackupJobConfigData {
- // ... existing fields
- preprocessType?: 'none' | 'zfs' | 'flash' | 'script';
- preprocessConfig?: {
- zfs?: {
- poolName: string;
- snapshotName?: string;
- streamDirect?: boolean;
- compressionLevel?: number;
- };
- flash?: {
- gitPath?: string;
- compressionLevel?: number;
- streamDirect?: boolean;
- localCachePath?: string;
- commitMessage?: string;
- includeGitHistory?: boolean;
- };
- script?: {
- scriptPath: string;
- scriptArgs?: string[];
- workingDirectory?: string;
- timeout?: number;
- };
- };
- preprocessTimeout?: number;
- cleanupOnFailure?: boolean;
-}
-```
-
-**GraphQL Schema Extensions** (only expose what UI needs):
-```typescript
-import { Field, InputType, ObjectType, registerEnumType } from '@nestjs/graphql';
-import { IsBoolean, IsEnum, IsNumber, IsObject, IsOptional, IsPositive, Max } from 'class-validator';
-import { GraphQLJSON } from 'graphql-scalars';
-// Node is the application's existing relay-style base type used by BackupJobConfig.
-
-registerEnumType(PreprocessType, {
- name: 'PreprocessType',
- description: 'Type of preprocessing to perform before backup'
-});
-
-// Extend existing BackupJobConfig ObjectType
-@ObjectType({
- implements: () => Node,
-})
-export class BackupJobConfig extends Node {
- // ... existing fields
-
- @Field(() => PreprocessType, { nullable: true, defaultValue: PreprocessType.NONE })
- preprocessType?: PreprocessType;
-
- @Field(() => GraphQLJSON, { nullable: true, description: 'Preprocessing configuration' })
- preprocessConfig?: Record<string, any>;
-
- @Field(() => Number, { nullable: true, description: 'Preprocessing timeout in seconds' })
- preprocessTimeout?: number;
-
- @Field(() => Boolean, { nullable: true, description: 'Cleanup on failure' })
- cleanupOnFailure?: boolean;
-}
-
-// Extend existing input types
-@InputType()
-export class CreateBackupJobConfigInput {
- // ... existing fields
-
- @Field(() => PreprocessType, { nullable: true, defaultValue: PreprocessType.NONE })
- @IsOptional()
- @IsEnum(PreprocessType)
- preprocessType?: PreprocessType;
-
- @Field(() => GraphQLJSON, { nullable: true })
- @IsOptional()
- @IsObject()
- preprocessConfig?: Record<string, any>;
-
- @Field(() => Number, { nullable: true, defaultValue: 300 })
- @IsOptional()
- @IsNumber()
- @IsPositive()
- @Max(3600)
- preprocessTimeout?: number;
-
- @Field(() => Boolean, { nullable: true, defaultValue: true })
- @IsOptional()
- @IsBoolean()
- cleanupOnFailure?: boolean;
-}
-
-@InputType()
-export class UpdateBackupJobConfigInput {
- // ... existing fields
-
- @Field(() => PreprocessType, { nullable: true })
- @IsOptional()
- @IsEnum(PreprocessType)
- preprocessType?: PreprocessType;
-
- @Field(() => GraphQLJSON, { nullable: true })
- @IsOptional()
- @IsObject()
- preprocessConfig?: Record<string, any>;
-
- @Field(() => Number, { nullable: true })
- @IsOptional()
- @IsNumber()
- @IsPositive()
- @Max(3600)
- preprocessTimeout?: number;
-
- @Field(() => Boolean, { nullable: true })
- @IsOptional()
- @IsBoolean()
- cleanupOnFailure?: boolean;
-}
-```
-
-**Validation Service for Business Logic**:
-```typescript
-import { BadRequestException, Injectable } from '@nestjs/common';
-import { plainToClass } from 'class-transformer';
-import { validate } from 'class-validator';
-
-@Injectable()
-export class PreprocessConfigValidationService {
-
- async validateAndTransform(input: any): Promise<BackupJobPreprocessDto> {
- // Transform to DTO and validate
- const dto = plainToClass(BackupJobPreprocessDto, input);
- const validationErrors = await validate(dto);
-
- if (validationErrors.length > 0) {
- const errorMessages = validationErrors
- .map(error => Object.values(error.constraints || {}).join(', '))
- .join('; ');
- throw new BadRequestException(`Validation failed: ${errorMessages}`);
- }
-
- // Custom business logic validation
- const businessErrors = this.validateBusinessRules(dto);
- if (businessErrors.length > 0) {
- throw new BadRequestException(`Configuration errors: ${businessErrors.join('; ')}`);
- }
-
- // Additional async validations
- await this.validateAsyncRules(dto);
-
- return dto;
- }
-
- private validateBusinessRules(dto: BackupJobPreprocessDto): string[] {
- const errors: string[] = [];
-
- // Ensure config matches type
- if (dto.preprocessType !== PreprocessType.NONE && !dto.preprocessConfig) {
- errors.push('Preprocessing configuration is required when preprocessType is not "none"');
- }
-
- if (dto.preprocessType === PreprocessType.ZFS && !dto.preprocessConfig?.zfs) {
- errors.push('ZFS configuration is required when preprocessType is "zfs"');
- }
-
- if (dto.preprocessType === PreprocessType.FLASH && !dto.preprocessConfig?.flash) {
- errors.push('Flash configuration is required when preprocessType is "flash"');
- }
-
- if (dto.preprocessType === PreprocessType.SCRIPT && !dto.preprocessConfig?.script) {
- errors.push('Script configuration is required when preprocessType is "script"');
- }
-
- // Flash-specific validations
- if (dto.preprocessConfig?.flash) {
- const flashConfig = dto.preprocessConfig.flash;
-
- if (flashConfig.localCachePath && flashConfig.streamDirect !== false) {
- errors.push('localCachePath can only be used when streamDirect is false');
- }
-
- if (flashConfig.gitPath && !flashConfig.gitPath.endsWith('/.git')) {
- errors.push('Git path should end with "/.git"');
- }
- }
-
- // ZFS-specific validations
- if (dto.preprocessConfig?.zfs) {
- const zfsConfig = dto.preprocessConfig.zfs;
-
- if (zfsConfig.poolName.includes('..') || zfsConfig.poolName.startsWith('/')) {
- errors.push('Invalid ZFS pool name format');
- }
- }
-
- // Script-specific validations
- if (dto.preprocessConfig?.script) {
- const scriptConfig = dto.preprocessConfig.script;
-
- if (!scriptConfig.scriptPath.match(/\.(sh|py|pl|js)$/)) {
- errors.push('Script must have a valid extension (.sh, .py, .pl, .js)');
- }
-
- if (scriptConfig.scriptArgs?.some(arg => arg.includes(';') || arg.includes('|') || arg.includes('&'))) {
- errors.push('Script arguments cannot contain shell operators (;, |, &)');
- }
- }
-
- return errors;
- }
-
- private async validateAsyncRules(dto: BackupJobPreprocessDto): Promise<void> {
- if (dto.preprocessType === PreprocessType.ZFS && dto.preprocessConfig?.zfs) {
- const poolExists = await this.validateZfsPool(dto.preprocessConfig.zfs.poolName);
- if (!poolExists) {
- throw new BadRequestException(`ZFS pool '${dto.preprocessConfig.zfs.poolName}' does not exist`);
- }
- }
-
- if (dto.preprocessType === PreprocessType.SCRIPT && dto.preprocessConfig?.script) {
- const scriptExists = await this.validateScriptExists(dto.preprocessConfig.script.scriptPath);
- if (!scriptExists) {
- throw new BadRequestException(`Script '${dto.preprocessConfig.script.scriptPath}' does not exist or is not executable`);
- }
- }
- }
-
- async validateZfsPool(poolName: string): Promise<boolean> {
- // Implementation would check if ZFS pool exists
- return true;
- }
-
- async validateScriptExists(scriptPath: string): Promise<boolean> {
- // Implementation would check if script file exists and is executable
- return true;
- }
-}
-```
-
-**Service Integration**:
-```typescript
-// In BackupConfigService
-constructor(
- private readonly rcloneService: RCloneService,
- private readonly schedulerRegistry: SchedulerRegistry,
- private readonly preprocessValidationService: PreprocessConfigValidationService,
- private readonly preprocessingService: PreprocessingService
-) {
- // ... existing constructor logic
-}
-
-async createBackupJobConfig(input: CreateBackupJobConfigInput): Promise<BackupJobConfig> {
- // Validate preprocessing config if provided
- if (input.preprocessType && input.preprocessType !== PreprocessType.NONE) {
- await this.preprocessValidationService.validateAndTransform({
- preprocessType: input.preprocessType,
- preprocessConfig: input.preprocessConfig,
- preprocessTimeout: input.preprocessTimeout,
- cleanupOnFailure: input.cleanupOnFailure
- });
- }
-
- // ... rest of existing logic
-}
-```
-
-**Key Benefits of This Approach**:
-- **Separation of Concerns**: Internal DTOs handle validation, GraphQL schema only exposes what UI needs
-- **Type Safety**: Full validation on internal DTOs, simple JSON for GraphQL flexibility
-- **Minimal GraphQL Changes**: Only add essential fields to existing schema
-- **Backward Compatibility**: Existing backup jobs continue to work (preprocessType defaults to 'none')
-- **Flexible Configuration**: UI can send any valid JSON, validated internally by DTOs
-- **Future-Proof**: Easy to add new preprocessing types without GraphQL schema changes
-
-### Architecture Overview
-
-**Preprocessing Service**: New service to handle different preprocessing types
-**Streaming Integration**: Direct integration with rclone daemon for streaming operations
-**Job Execution Flow**: Modified to include preprocessing step with streaming support
-**Cleanup Management**: Automatic cleanup of temporary resources
-
-### API Specifications
-
-**New Preprocessing Service Methods**:
-- `executePreprocessing(config, jobId): Promise<PreprocessResult>`
-- `executeStreamingPreprocessing(config, jobId): Promise<PreprocessResult>`
-- `cleanupPreprocessing(config, jobId): Promise<void>`
-- `validatePreprocessConfig(config): ValidationResult`
-
-**PreprocessResult Interface**:
-```typescript
-interface PreprocessResult {
- success: boolean;
- outputPath?: string; // Path to the final backup destination
- localCachePath?: string; // Path to local cache file (if used)
- streaming: boolean; // Whether the operation used streaming
- message: string; // Human-readable status message
- metadata?: {
- gitCommitHash?: string; // For flash backups
- snapshotName?: string; // For ZFS backups
- scriptExitCode?: number; // For custom scripts
- bytesProcessed?: number;
- processingTimeMs?: number;
- };
-}
-```
-
-## Development Details
-
-### Implementation Approach
-
-**Phase 1**: Core preprocessing infrastructure
-- Add preprocessing fields to data models
-- Create base preprocessing service
-- Implement 'none' type (current behavior)
-
-**Phase 2**: RCloneApiService streaming extensions
-- Add `startStreamingBackup()` method to handle rcat subprocess operations
-- Implement streaming job tracking that integrates with existing job system
-- Create streaming job status monitoring (bridge subprocess with daemon job tracking)
-- Add streaming job cancellation capabilities (process management + cleanup)
-- Extend job grouping to include streaming operations under `JOB_GROUP_PREFIX`
-
-**Phase 3**: Streaming job management integration
-- Modify `getAllJobsWithStats()` to include streaming jobs alongside daemon jobs
-- Update `getEnhancedJobStatus()` to handle both daemon and streaming job types
-- Implement streaming job progress monitoring (file size, transfer rate estimation)
-- Add streaming job error handling and retry logic
-- Ensure streaming jobs appear in backup job lists with proper status
-
-**Phase 4**: Flash backup integration (Priority Feature)
-- Local git repository setup and configuration
-- Git filters and exclusions for proper file handling
-- Local commit operations for configuration tracking
-- Git repository streaming compression using `tar cf - /boot/.git | rclone rcat remote:backup.tar`
-- Direct streaming to destination via rclone daemon
-- No temporary local storage required
-- Simplified approach without remote git operations or Unraid Connect dependency
-
-**Phase 5**: ZFS snapshot support
-- ZFS snapshot creation/deletion
-- Streaming via `zfs send | rclone rcat remote:backup`
-- Error handling for ZFS operations
-- Cleanup of temporary snapshots (see the sketch after this list)
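-
-A minimal sketch of that flow, assuming a dataset name and remote path are already resolved and invoking `rclone rcat` directly; the real implementation would route this through the streaming job manager described later in this document:
-
-```typescript
-import { execFile, spawn } from 'child_process';
-import { promisify } from 'util';
-
-const execFileAsync = promisify(execFile);
-
-// Snapshot -> `zfs send | rclone rcat` -> destroy snapshot, even on failure.
-export async function zfsSnapshotBackup(dataset: string, remotePath: string): Promise<void> {
-    const snapshot = `${dataset}@backup-${Date.now()}`;
-    await execFileAsync('zfs', ['snapshot', snapshot]);
-    try {
-        await new Promise<void>((resolve, reject) => {
-            const send = spawn('zfs', ['send', snapshot]);
-            const rcat = spawn('rclone', ['rcat', remotePath]);
-            send.stdout.pipe(rcat.stdin);
-            send.on('error', reject);
-            rcat.on('error', reject);
-            rcat.on('exit', (code) =>
-                code === 0 ? resolve() : reject(new Error(`rclone rcat exited with code ${code}`))
-            );
-        });
-    } finally {
-        // Clean up the temporary snapshot regardless of the transfer outcome.
-        await execFileAsync('zfs', ['destroy', snapshot]);
-    }
-}
-```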
-
-**Phase 6**: Custom script support
-- Script execution in sandboxed environment
-- File-based output (non-streaming for security)
-- Parameter passing and environment setup
-- Security restrictions and validation (a simplified sketch follows this list)
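-
-A simplified sketch of the execution step, assuming the `ScriptPreprocessConfigDto` fields defined above; argument-array invocation (no shell), a stripped environment, and a timeout stand in here for a full sandbox:
-
-```typescript
-import { execFile } from 'child_process';
-import { promisify } from 'util';
-
-const execFileAsync = promisify(execFile);
-
-// Run the user script with a restricted environment and timeout, returning its exit code.
-export async function runPreprocessScript(cfg: {
-    scriptPath: string;
-    scriptArgs?: string[];
-    workingDirectory?: string;
-    timeout?: number; // seconds
-}): Promise<number> {
-    try {
-        await execFileAsync(cfg.scriptPath, cfg.scriptArgs ?? [], {
-            cwd: cfg.workingDirectory ?? '/tmp',
-            env: { PATH: '/usr/bin:/bin' }, // minimal environment, no inherited secrets
-            timeout: (cfg.timeout ?? 300) * 1000,
-        });
-        return 0;
-    } catch (error: any) {
-        // execFile rejects with the script's exit code attached when it fails.
-        return typeof error?.code === 'number' ? error.code : 1;
-    }
-}
-```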
-
-### Streaming Implementation Details
-
-**ZFS Streaming with RClone Daemon API**:
-```typescript
-// Use RCloneApiService.startBackup() with streaming source
-const zfsCommand = `zfs send pool/dataset@backup-timestamp`;
-const destinationPath = `${config.remoteName}:${config.destinationPath}/zfs-backup-timestamp`;
-
-// Stream ZFS data directly to rclone daemon via API
-await this.executeStreamingBackup(zfsCommand, destinationPath, config);
-```
-
-**Flash Backup Streaming with Complete Git Setup**:
-```typescript
-// Simplified flash backup preprocessing without remote git operations
-async executeFlashBackupPreprocessing(config: FlashBackupConfig, jobId: string): Promise<PreprocessResult> {
- try {
- // 1. Initialize/configure local git repository (always done)
- await this.setupLocalGitRepository();
-
- // 2. Configure git filters and exclusions
- await this.configureGitFilters();
-
- // 3. Perform local git operations (add, commit locally only)
- await this.performLocalGitOperations(config.commitMessage || 'Backup via comprehensive backup system');
-
- // 4. Create backup - either streaming or local cache
- if (config.streamDirect !== false) {
- // Stream git repository directly to destination
- const tarCommand = `tar cf - -C /boot .git`;
- const destinationPath = `${config.remoteName}:${config.destinationPath}/flash-backup-${Date.now()}.tar`;
-
- await this.executeStreamingBackup(tarCommand, destinationPath, config);
-
- return {
- success: true,
- outputPath: destinationPath,
- streaming: true,
- message: 'Flash backup streamed successfully to destination'
- };
- } else {
- // Create local backup file first, then upload via rclone
- const localCachePath = config.localCachePath || `/tmp/flash-backup-${Date.now()}.tar`;
- const destinationPath = `${config.remoteName}:${config.destinationPath}/flash-backup-${Date.now()}.tar`;
-
- // Create local tar file
- await this.executeCommand(`tar cf "${localCachePath}" -C /boot .git`);
-
- // Upload via standard rclone
- await this.executeStandardBackup(localCachePath, destinationPath, config);
-
- // Cleanup local cache if it was auto-generated
- if (!config.localCachePath) {
- await this.deleteFile(localCachePath);
- }
-
- return {
- success: true,
- outputPath: destinationPath,
- streaming: false,
- localCachePath: config.localCachePath ? localCachePath : undefined,
- message: 'Flash backup completed successfully via local cache'
- };
- }
-
- } catch (error) {
- this.logger.error(`Flash backup preprocessing failed: ${error.message}`);
- throw new Error(`Flash backup failed: ${error.message}`);
- }
-}
-
-private async setupLocalGitRepository(): Promise<void> {
- // Initialize git repository if needed
- if (!await this.fileExists('/boot/.git/info/exclude')) {
- await this.executeCommand('git init /boot');
- }
-
- // Setup git description
- const varConfig = await this.readConfigFile('/var/local/emhttp/var.ini');
- const serverName = varConfig?.NAME || 'Unknown Server';
- const gitDescText = `Unraid flash drive for ${serverName}\n`;
- const gitDescPath = '/boot/.git/description';
-
- if (!await this.fileExists(gitDescPath) || await this.readFile(gitDescPath) !== gitDescText) {
- await this.writeFile(gitDescPath, gitDescText);
- }
-
- // Configure git user
- await this.setGitConfig('user.email', 'gitbot@unraid.net');
- await this.setGitConfig('user.name', 'gitbot');
-}
-
-private async performLocalGitOperations(commitMessage: string): Promise<void> {
- // Check status
- const { stdout: statusOutput } = await this.executeCommand('git -C /boot status --porcelain');
-
- let needsCommit = false;
- if (statusOutput.trim().length > 0) {
- needsCommit = true;
- } else {
- // Check for uncommitted changes
- const { stdout: diffOutput } = await this.executeCommand('git -C /boot diff --cached --name-only', { allowFailure: true });
- if (diffOutput.trim().length > 0) {
- needsCommit = true;
- }
- }
-
- if (needsCommit) {
- // Remove invalid files from repo
- const { stdout: invalidFiles } = await this.executeCommand('git -C /boot ls-files --cached --ignored --exclude-standard', { allowFailure: true });
- if (invalidFiles.trim()) {
- for (const file of invalidFiles.trim().split('\n')) {
- if (file.trim()) {
- await this.executeCommand(`git -C /boot rm --cached --ignore-unmatch '${file.trim()}'`);
- }
- }
- }
-
- // Add and commit changes locally only
- await this.executeCommand('git -C /boot add -A');
- await this.executeCommand(`git -C /boot commit -m "${commitMessage}"`);
-
- this.logger.log('Local git commit completed for flash backup');
- } else {
- this.logger.log('No changes detected, skipping git commit');
- }
-}
-```
-
-**Flash Backup Streaming with RClone Daemon API**:
-```typescript
-// Stream git archive directly to rclone daemon
-const tarCommand = `tar cf - /boot/.git`;
-const destinationPath = `${config.remoteName}:${config.destinationPath}/flash-backup-timestamp.tar`;
-
-await this.executeStreamingBackup(tarCommand, destinationPath, config);
-```
-
-**Docker Volume Streaming with RClone Daemon API**:
-```typescript
-// Stop container, stream volume data, restart container
-await this.dockerService.stopContainer(config.containerName);
-const dockerCommand = `docker run --rm -v ${config.volumeName}:/data alpine tar cf - /data`;
-const destinationPath = `${config.remoteName}:${config.destinationPath}/docker-backup-timestamp.tar`;
-
-await this.executeStreamingBackup(dockerCommand, destinationPath, config);
-await this.dockerService.startContainer(config.containerName);
-```
-
-**Implementation Notes**:
-- **Hybrid Approach**: Use direct `rclone rcat` calls for streaming operations, daemon API for everything else
-- **Streaming Method**: Direct `rclone rcat` subprocess with piped input from preprocessing commands
-- **Job Management**: Leverage existing RCloneApiService for configuration, monitoring, and job tracking
-- **Compression Handling**: User configures compress remote in UI, we just use their chosen remote
-- **Error Handling**: Combine subprocess error handling with existing RCloneApiService retry logic
-- **Process Management**: Proper cleanup of streaming subprocesses and monitoring integration
-
-**API Integration Points**:
-- `RCloneApiService.getRemoteConfig()` for validating user's remote configuration
-- `RCloneApiService.getEnhancedJobStatus()` for monitoring progress (if possible to correlate)
-- `RCloneApiService.stopJob()` for cancellation (may need custom process management)
-- Existing job grouping with `JOB_GROUP_PREFIX` for backup jobs
-- Custom subprocess management for streaming operations
-
-### Subprocess Lifecycle Management
-
-**Process Tracking**:
-```typescript
-import { ChildProcess, spawn } from 'child_process';
-import { Logger } from '@nestjs/common';
-import { v4 as uuidv4 } from 'uuid';
-
-interface StreamingJobProcess {
- jobId: string;
- configId: string;
- subprocess: ChildProcess;
- startTime: Date;
- command: string;
- destinationPath: string;
- status: 'starting' | 'running' | 'completed' | 'failed' | 'cancelled';
- bytesTransferred?: number;
- error?: string;
-}
-
-class StreamingJobManager {
- private activeProcesses = new Map<string, StreamingJobProcess>();
- private readonly logger = new Logger(StreamingJobManager.name);
-
- async startStreamingJob(command: string, destination: string, configId: string): Promise<string> {
- const jobId = `stream-${uuidv4()}`;
- const subprocess = spawn('sh', ['-c', `${command} | rclone rcat ${destination}`]);
-
- const processInfo: StreamingJobProcess = {
- jobId,
- configId,
- subprocess,
- startTime: new Date(),
- command,
- destinationPath: destination,
- status: 'starting'
- };
-
- this.activeProcesses.set(jobId, processInfo);
- this.setupProcessHandlers(processInfo);
- return jobId;
- }
-
- private setupProcessHandlers(processInfo: StreamingJobProcess): void {
- const { subprocess, jobId } = processInfo;
-
- subprocess.on('spawn', () => {
- processInfo.status = 'running';
- this.logger.log(`Streaming job ${jobId} started successfully`);
- });
-
- subprocess.on('exit', (code, signal) => {
- if (signal === 'SIGTERM' || signal === 'SIGKILL') {
- processInfo.status = 'cancelled';
- } else if (code === 0) {
- processInfo.status = 'completed';
- } else {
- processInfo.status = 'failed';
- processInfo.error = `Process exited with code ${code}`;
- }
-
- this.logger.log(`Streaming job ${jobId} finished with status: ${processInfo.status}`);
- // Keep process info for status queries, cleanup after timeout
- setTimeout(() => this.activeProcesses.delete(jobId), 300000); // 5 minutes
- });
-
- subprocess.on('error', (error) => {
- processInfo.status = 'failed';
- processInfo.error = error.message;
- this.logger.error(`Streaming job ${jobId} failed:`, error);
- });
- }
-
- async stopStreamingJob(jobId: string): Promise<boolean> {
- const processInfo = this.activeProcesses.get(jobId);
- if (!processInfo || processInfo.status === 'completed' || processInfo.status === 'failed') {
- return false;
- }
-
- processInfo.status = 'cancelled';
- processInfo.subprocess.kill('SIGTERM');
-
- // Force kill after 10 seconds if still running
- setTimeout(() => {
- if (!processInfo.subprocess.killed) {
- processInfo.subprocess.kill('SIGKILL');
- }
- }, 10000);
-
- return true;
- }
-}
-```
-
-**Service Shutdown Cleanup**:
-```typescript
-async onModuleDestroy(): Promise<void> {
- this.logger.log('Cleaning up streaming processes...');
-
- const activeJobs = Array.from(this.activeProcesses.values())
- .filter(p => p.status === 'running' || p.status === 'starting');
-
- if (activeJobs.length > 0) {
- this.logger.log(`Terminating ${activeJobs.length} active streaming jobs`);
-
- // Graceful termination
- activeJobs.forEach(job => job.subprocess.kill('SIGTERM'));
-
- // Wait up to 5 seconds for graceful shutdown
- await new Promise(resolve => setTimeout(resolve, 5000));
-
- // Force kill any remaining processes
- activeJobs.forEach(job => {
- if (!job.subprocess.killed) {
- job.subprocess.kill('SIGKILL');
- }
- });
- }
-}
-```
-
-### Job Status Correlation
-
-**Unified Job Status System**:
-```typescript
-interface UnifiedJobStatus {
- id: string;
- type: 'daemon' | 'streaming';
- configId?: string;
- status: 'running' | 'completed' | 'failed' | 'cancelled';
- progress?: {
- bytesTransferred: number;
- totalBytes?: number;
- transferRate: number;
- eta?: number;
- };
- startTime: Date;
- endTime?: Date;
- error?: string;
-}
-
-async getAllJobsWithStats(): Promise<RCloneJob[]> {
- // Get existing daemon jobs
- const daemonJobs = await this.getExistingDaemonJobs();
-
- // Get streaming jobs and convert to RCloneJob format
- const streamingJobs = Array.from(this.streamingManager.activeProcesses.values())
- .filter(p => p.status === 'running' || p.status === 'starting')
- .map(p => this.convertStreamingToRCloneJob(p));
-
- return [...daemonJobs, ...streamingJobs];
-}
-
-private convertStreamingToRCloneJob(processInfo: StreamingJobProcess): RCloneJob {
- return {
- id: processInfo.jobId,
- configId: processInfo.configId,
- status: this.mapStreamingStatus(processInfo.status),
- group: `${JOB_GROUP_PREFIX}${processInfo.configId}`,
- startTime: processInfo.startTime.toISOString(),
- stats: {
- bytes: processInfo.bytesTransferred || 0,
- speed: this.estimateTransferRate(processInfo),
- eta: null, // Streaming jobs don't have reliable ETA
- transferring: processInfo.status === 'running' ? [processInfo.destinationPath] : [],
- checking: [],
- errors: processInfo.error ? 1 : 0,
- fatalError: processInfo.status === 'failed',
- finished: processInfo.status === 'completed' || processInfo.status === 'failed'
- }
- };
-}
-```
-
-**Progress Monitoring for Streaming Jobs**:
-```typescript
-private estimateTransferRate(processInfo: StreamingJobProcess): number {
- if (!processInfo.bytesTransferred || processInfo.status !== 'running') {
- return 0;
- }
-
- const elapsedSeconds = (Date.now() - processInfo.startTime.getTime()) / 1000;
- return elapsedSeconds > 0 ? processInfo.bytesTransferred / elapsedSeconds : 0;
-}
-
-// Monitor subprocess output to track progress
-private setupProgressMonitoring(processInfo: StreamingJobProcess): void {
- let lastProgressUpdate = Date.now();
-
- processInfo.subprocess.stderr?.on('data', (data) => {
- const output = data.toString();
-
- // Parse rclone progress output (if available)
- const progressMatch = output.match(/Transferred:\s+(\d+(?:\.\d+)?)\s*(\w+)/);
- if (progressMatch) {
- const [, amount, unit] = progressMatch;
- processInfo.bytesTransferred = this.parseBytes(amount, unit);
- lastProgressUpdate = Date.now();
- }
- });
-
- // Fallback: estimate progress based on time for jobs without progress output
- const progressEstimator = setInterval(() => {
- if (processInfo.status !== 'running') {
- clearInterval(progressEstimator);
- return;
- }
-
- // If no progress updates for 30 seconds, job might be stalled
- if (Date.now() - lastProgressUpdate > 30000) {
- this.logger.warn(`No progress updates for streaming job ${processInfo.jobId} for 30 seconds`);
- }
- }, 10000);
-}
-```
-
-### Error Recovery and Retry Logic
-
-**Streaming-Specific Error Handling**:
-```typescript
-async executeStreamingBackup(command: string, destination: string, config: any): Promise<void> {
- const maxRetries = 3;
- let attempt = 0;
-
- while (attempt < maxRetries) {
- try {
- const jobId = await this.streamingManager.startStreamingJob(command, destination, config.id);
- await this.waitForStreamingCompletion(jobId);
- return; // Success
-
- } catch (error) {
- attempt++;
- this.logger.warn(`Streaming backup attempt ${attempt} failed:`, error);
-
- if (attempt >= maxRetries) {
- throw new Error(`Streaming backup failed after ${maxRetries} attempts: ${error.message}`);
- }
-
- // Exponential backoff
- const delay = Math.min(1000 * Math.pow(2, attempt - 1), 30000);
- await new Promise(resolve => setTimeout(resolve, delay));
- }
- }
-}
-
-private async waitForStreamingCompletion(jobId: string): Promise<void> {
- return new Promise((resolve, reject) => {
- const checkStatus = () => {
- const processInfo = this.streamingManager.activeProcesses.get(jobId);
-
- if (!processInfo) {
- reject(new Error(`Streaming job ${jobId} not found`));
- return;
- }
-
- switch (processInfo.status) {
- case 'completed':
- resolve();
- break;
- case 'failed':
- reject(new Error(processInfo.error || 'Streaming job failed'));
- break;
- case 'cancelled':
- reject(new Error('Streaming job was cancelled'));
- break;
- default:
- // Still running, check again in 1 second
- setTimeout(checkStatus, 1000);
- }
- };
-
- checkStatus();
- });
-}
-
-// Handle partial stream failures
-private async handleStreamingFailure(processInfo: StreamingJobProcess): Promise<void> {
- this.logger.error(`Streaming job ${processInfo.jobId} failed, attempting cleanup`);
-
- // Kill subprocess if still running
- if (!processInfo.subprocess.killed) {
- processInfo.subprocess.kill('SIGTERM');
- }
-
- // Check if partial data was uploaded and needs cleanup
- try {
- // Attempt to remove partial upload from destination
- await this.cleanupPartialUpload(processInfo.destinationPath);
- } catch (cleanupError) {
- this.logger.warn(`Failed to cleanup partial upload: ${cleanupError.message}`);
- }
-}
-```
-
-### Concurrency Management
-
-**Resource Limits and Throttling**:
-```typescript
-interface ConcurrencyConfig {
- maxConcurrentStreaming: number;
- maxConcurrentPerConfig: number;
- maxTotalBandwidth: number; // bytes per second
- queueTimeout: number; // milliseconds
-}
-
-class ConcurrencyManager {
- private readonly config: ConcurrencyConfig = {
- maxConcurrentStreaming: 3,
- maxConcurrentPerConfig: 1,
- maxTotalBandwidth: 100 * 1024 * 1024, // 100 MB/s
- queueTimeout: 300000 // 5 minutes
- };
-
- private readonly jobQueue: Array<{
- configId: string;
- command: string;
- destination: string;
- resolve: (jobId: string) => void;
- reject: (error: Error) => void;
- queuedAt: Date;
- }> = [];
-
- async queueStreamingJob(command: string, destination: string, configId: string): Promise<string> {
- // Check immediate availability
- if (this.canStartImmediately(configId)) {
- return this.streamingManager.startStreamingJob(command, destination, configId);
- }
-
- // Queue the job
- return new Promise((resolve, reject) => {
- this.jobQueue.push({
- configId,
- command,
- destination,
- resolve,
- reject,
- queuedAt: new Date()
- });
-
- // Set timeout for queued job
- setTimeout(() => {
- const index = this.jobQueue.findIndex(job => job.resolve === resolve);
- if (index !== -1) {
- this.jobQueue.splice(index, 1);
- reject(new Error('Job timed out in queue'));
- }
- }, this.config.queueTimeout);
-
- this.processQueue();
- });
- }
-
- private canStartImmediately(configId: string): boolean {
- const activeJobs = Array.from(this.streamingManager.activeProcesses.values())
- .filter(p => p.status === 'running' || p.status === 'starting');
-
- // Check global concurrent limit
- if (activeJobs.length >= this.config.maxConcurrentStreaming) {
- return false;
- }
-
- // Check per-config limit
- const configJobs = activeJobs.filter(p => p.configId === configId);
- if (configJobs.length >= this.config.maxConcurrentPerConfig) {
- return false;
- }
-
- // Check bandwidth usage
- const totalBandwidth = activeJobs.reduce((sum, job) =>
- sum + this.estimateTransferRate(job), 0);
- if (totalBandwidth >= this.config.maxTotalBandwidth) {
- return false;
- }
-
- return true;
- }
-
- private async processQueue(): Promise<void> {
- while (this.jobQueue.length > 0) {
- const job = this.jobQueue[0];
-
- // Remove expired jobs
- if (Date.now() - job.queuedAt.getTime() > this.config.queueTimeout) {
- this.jobQueue.shift();
- job.reject(new Error('Job expired in queue'));
- continue;
- }
-
- if (this.canStartImmediately(job.configId)) {
- this.jobQueue.shift();
- try {
- const jobId = await this.streamingManager.startStreamingJob(
- job.command,
- job.destination,
- job.configId
- );
- job.resolve(jobId);
- } catch (error) {
- job.reject(error);
- }
- } else {
- break; // Can't start any more jobs right now
- }
- }
- }
-
- // Called when streaming jobs complete to process queue
- onStreamingJobComplete(): void {
- this.processQueue();
- }
-}
-```
-
-**Integration with Existing Job Grouping**:
-```typescript
-// Extend existing job grouping to include streaming operations
-async stopJob(jobId: string): Promise<{ stopped: string[]; errors: string[] }> {
- // Check if this is a streaming job
- if (jobId.startsWith('stream-')) {
- const success = await this.streamingManager.stopStreamingJob(jobId);
- return {
- stopped: success ? [jobId] : [],
- errors: success ? [] : [`Failed to stop streaming job ${jobId}`]
- };
- }
-
- // Handle daemon jobs and groups as before
- if (jobId.startsWith(JOB_GROUP_PREFIX)) {
- // Stop all jobs in the group (both daemon and streaming)
- const groupJobs = await this.getJobsInGroup(jobId);
- const results = await Promise.allSettled(
- groupJobs.map(job => this.stopJob(job.id))
- );
-
- return this.aggregateStopResults(results);
- }
-
- // Regular daemon job
- return this.executeJobOperation([jobId], 'stop');
-}
-```
-
-## Testing Approach
-
-### Test Cases
-
-**Unit Tests**:
-- Preprocessing service methods
-- Configuration validation
-- Error handling scenarios
-- Streaming pipeline validation
-
-**Integration Tests**:
-- End-to-end backup workflows with preprocessing
-- ZFS snapshot streaming operations
-- Docker container management with streaming
-- Flash backup streaming compression
-- Rclone daemon integration
-
-**Edge Cases**:
-- Network failures during streaming
-- ZFS snapshot creation failures
-- Docker container stop/start failures
-- Permission issues with ZFS/Docker operations
-- Malformed custom scripts
-- Streaming interruption and recovery
-
-### Acceptance Criteria
-
-1. User can select preprocessing type in backup job configuration
-2. ZFS snapshot backups stream directly to destination without local storage
-3. Flash backup streams compressed archive directly to destination
-4. Docker containers are safely stopped/started with volume data streamed
-5. Custom scripts execute with proper error handling (file-based output)
-6. All streaming operations respect timeout settings
-7. Failed preprocessing operations clean up properly (including snapshots)
-8. Job status accurately reflects preprocessing progress
-9. Streaming operations show real-time progress
-
-## Future Considerations
-
-### Scalability Plans
-- Support for multiple preprocessing steps per job
-- Parallel preprocessing for multiple backup sources
-- Preprocessing step templates and sharing
-- Advanced streaming compression algorithms
-
-### Enhancement Ideas
-- Database dump preprocessing with streaming (MySQL/PostgreSQL)
-- VM snapshot integration with streaming
-- Network share mounting/unmounting
-- Encryption preprocessing steps
-- Multi-stream parallel processing
-
-### Known Limitations
-- Custom scripts limited to file-based operations (no streaming for security)
-- ZFS operations require appropriate system permissions
-- Docker operations require Docker daemon access
-- Streaming operations require sufficient network bandwidth for real-time processing
-- Streaming failures may require full restart (no partial resume capability)
-
-## Migration from UpdateFlashBackup.php
-
-### Replacement Strategy
-
-**Local Git Repository Management**:
-- Local git repository initialization and configuration
-- Git filters and exclusions setup for proper file handling
-- Local commit operations to track configuration changes
-- Streaming backup of git repository without remote synchronization
-- No Unraid Connect authentication or remote git push operations
-
-**Simplified Approach**:
-- Focus on local git repository preparation and streaming
-- Remove dependency on Unraid Connect for backup operations
-- Maintain git history locally for configuration tracking
-- Stream entire git repository to backup destination
-- Preserve existing UpdateFlashBackup.php for users who need remote sync
-
-**Enhanced Features**:
-- Integration with comprehensive backup job system
-- Unified monitoring and status reporting
-- Streaming capabilities for faster, more efficient backups
-- Better error handling and retry logic
-- Consistent logging and debugging across all backup types
-
-**Migration Steps**:
-1. Implement local git preprocessing in backup system
-2. Add UI option to use new local flash backup method
-3. Test new system alongside existing UpdateFlashBackup.php
-4. Allow users to choose between local backup and remote sync
-5. Maintain both options for different use cases
-
-**Configuration Mapping**:
-```typescript
-// Legacy UpdateFlashBackup.php (for remote sync)
-const legacyFlashBackup = {
- command: 'update',
- commitmsg: 'Config change'
-};
-
-// New local preprocessing configuration
-const newLocalFlashBackup: BackupJobConfigData = {
- preprocessType: 'flash',
- preprocessConfig: {
- flash: {
- gitPath: '/boot/.git',
- streamDirect: true,
- commitMessage: 'Config change',
- includeGitHistory: true
- }
- },
- // ... other backup job config
-};
-
-// Alternative with local cache
-const newLocalFlashBackupWithCache: BackupJobConfigData = {
- preprocessType: 'flash',
- preprocessConfig: {
- flash: {
- gitPath: '/boot/.git',
- streamDirect: false,
- localCachePath: '/mnt/cache/flash-backup.tar',
- commitMessage: 'Config change',
- includeGitHistory: true
- }
- },
- // ... other backup job config
-};
-```
-
-
-**Benefits of Local Approach**:
-- No dependency on Unraid Connect for backup operations
-- Faster backup process without remote authentication
-- Unified backup system for all backup types
-- Streaming capabilities reduce local storage requirements (when streamDirect=true)
-- Local cache option for scenarios requiring intermediate storage
-- Better integration with existing backup monitoring
-- Consistent error handling and retry logic
-
-**Validation Usage in Services**:
-```typescript
-import { Injectable, BadRequestException } from '@nestjs/common';
-import { validate } from 'class-validator';
-import { plainToClass } from 'class-transformer';
-
-@Injectable()
-export class BackupConfigService {
- constructor(
- private readonly validationService: PreprocessConfigValidationService
- ) {}
-
- async validateAndCreateBackupJob(input: any): Promise<BackupJobConfig> {
- // Transform and validate DTO
- const dto = plainToClass(BackupJobPreprocessDto, input);
- const validationErrors = await validate(dto);
-
- if (validationErrors.length > 0) {
- const errorMessages = validationErrors
- .map(error => Object.values(error.constraints || {}).join(', '))
- .join('; ');
- throw new BadRequestException(`Validation failed: ${errorMessages}`);
- }
-
- // Custom business logic validation
- const businessErrors = this.validationService.validateConfig(dto);
- if (businessErrors.length > 0) {
- throw new BadRequestException(`Configuration errors: ${businessErrors.join('; ')}`);
- }
-
- // Additional async validations
- if (dto.preprocessType === PreprocessType.ZFS && dto.preprocessConfig?.zfs) {
- const poolExists = await this.validationService.validateZfsPool(dto.preprocessConfig.zfs.poolName);
- if (!poolExists) {
- throw new BadRequestException(`ZFS pool '${dto.preprocessConfig.zfs.poolName}' does not exist`);
- }
- }
-
- if (dto.preprocessType === PreprocessType.SCRIPT && dto.preprocessConfig?.script) {
- const scriptExists = await this.validationService.validateScriptExists(dto.preprocessConfig.script.scriptPath);
- if (!scriptExists) {
- throw new BadRequestException(`Script '${dto.preprocessConfig.script.scriptPath}' does not exist or is not executable`);
- }
- }
-
- // Convert DTO to domain model
- return this.convertDtoToModel(dto);
- }
-
- private convertDtoToModel(dto: BackupJobPreprocessDto): BackupJobConfig {
- // Implementation to convert validated DTO to internal model
- return {
- preprocessType: dto.preprocessType,
- preprocessConfig: dto.preprocessConfig,
- preprocessTimeout: dto.preprocessTimeout,
- cleanupOnFailure: dto.cleanupOnFailure
- } as BackupJobConfig;
- }
-}
-
-// GraphQL Resolver with validation
-@Resolver()
-export class BackupJobResolver {
- constructor(
- private readonly backupConfigService: BackupConfigService
- ) {}
-
- @Mutation(() => BackupJobConfig)
- async createBackupJob(
- @Args('input') input: BackupJobPreprocessInput
- ): Promise<BackupJobConfig> {
- return this.backupConfigService.validateAndCreateBackupJob(input);
- }
-
- @Mutation(() => BackupJobConfig)
- async updateBackupJob(
- @Args('id') id: string,
- @Args('input') input: Partial<BackupJobPreprocessInput>
- ): Promise<BackupJobConfig> {
- // Merge with existing config and validate
- const existingConfig = await this.getExistingConfig(id);
- const mergedInput = { ...existingConfig, ...input };
- return this.backupConfigService.validateAndCreateBackupJob(mergedInput);
- }
-}
-
-// Validation pipe for automatic DTO validation
-import { ValidationPipe } from '@nestjs/common';
-
-// In main.ts or module configuration
-app.useGlobalPipes(new ValidationPipe({
- transform: true,
- whitelist: true,
- forbidNonWhitelisted: true,
- validateCustomDecorators: true
-}));
-```
-
-
\ No newline at end of file
diff --git a/.bivvy/m9X4-moves.json b/.bivvy/m9X4-moves.json
deleted file mode 100644
index d52b85ee8..000000000
--- a/.bivvy/m9X4-moves.json
+++ /dev/null
@@ -1,180 +0,0 @@
-{
- "climb": "m9X4",
- "moves": [
- {
- "status": "complete",
- "description": "Create preprocessing types and validation DTOs",
- "details": "Create the core preprocessing types, enums, and validation DTOs as specified in the climb document. This includes PreprocessType enum, validation classes for ZFS, Flash, and Script configurations, and the main PreprocessConfigDto classes.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/preprocessing.types.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Extend backup job data models with preprocessing fields",
- "details": "Add preprocessing fields to the BackupJobConfig GraphQL model and input types. Include preprocessType, preprocessConfig, preprocessTimeout, and cleanupOnFailure fields with proper GraphQL decorators and validation.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/backup.model.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Update BackupJobConfigData interface with preprocessing fields",
- "details": "Extend the BackupJobConfigData interface to include the new preprocessing fields and update the mapToGraphQL method to handle the new fields.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Create preprocessing validation service",
- "details": "Implement the PreprocessConfigValidationService with business logic validation, async validation for ZFS pools and scripts, and transformation methods as detailed in the climb document.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/preprocessing-validation.service.ts"
- ],
- "rest": true
- },
- {
- "status": "complete",
- "description": "Create streaming job manager",
- "details": "Implement the StreamingJobManager class to handle subprocess lifecycle management, process tracking, progress monitoring, and cleanup for streaming operations like ZFS and Flash backups.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/streaming-job-manager.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Create core preprocessing service",
- "details": "Implement the main PreprocessingService with methods for executing different preprocessing types, handling streaming operations, and managing cleanup. Include the PreprocessResult interface and core execution logic.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/preprocessing.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Extend RClone API service with streaming capabilities",
- "details": "Add streaming backup methods to RCloneApiService including startStreamingBackup, streaming job tracking integration, and unified job status management for both daemon and streaming jobs.",
- "files": [
- "api/src/unraid-api/graph/resolvers/rclone/rclone-api.service.ts"
- ],
- "rest": true
- },
- {
- "status": "complete",
- "description": "Create ZFS preprocessing implementation",
- "details": "Implement ZFS-specific preprocessing including snapshot creation, streaming via `zfs send | rclone rcat`, snapshot cleanup, and error handling for ZFS operations.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/zfs-preprocessing.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Create Flash backup preprocessing implementation",
- "details": "Implement Flash backup preprocessing with local git repository setup, git operations, and streaming via `tar cf - /boot/.git | rclone rcat` as detailed in the climb document.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/flash-preprocessing.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Create custom script preprocessing implementation",
- "details": "Implement custom script preprocessing with sandboxed execution, parameter passing, timeout handling, and file-based output (non-streaming for security).",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/preprocessing/script-preprocessing.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Update backup config service with preprocessing integration",
- "details": "Integrate preprocessing validation and execution into the backup config service. Update createBackupJobConfig, updateBackupJobConfig, and executeBackupJob methods to handle preprocessing.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts"
- ]
- },
- {
- "status": "complete",
- "description": "Update backup module with new services",
- "details": "Add all new preprocessing services to the BackupModule providers array and ensure proper dependency injection setup.",
- "files": [
- "api/src/unraid-api/graph/resolvers/backup/backup.module.ts"
- ],
- "rest": true
- },
- {
- "status": "complete",
- "description": "Update web GraphQL queries and fragments",
- "details": "Add preprocessing fields to the BACKUP_JOB_CONFIG_FRAGMENT and update mutations to include the new preprocessing configuration fields.",
- "files": [
- "web/components/Backup/backup-jobs.query.ts"
- ]
- },
- {
- "status": "todo",
- "description": "Create preprocessing UI components",
- "details": "Create Vue component for preprocessing configuration with dropdown for preprocessing type selection and dynamic form fields for each preprocessing type (ZFS, Flash, Script).",
- "files": [
- "web/components/Backup/PreprocessingConfig.vue"
- ]
- },
- {
- "status": "todo",
- "description": "Update backup job form component",
- "details": "Integrate the PreprocessingConfig component into the backup job form and handle preprocessing configuration state management.",
- "files": [
- "web/components/Backup/BackupJobForm.vue"
- ]
- },
- {
- "status": "todo",
- "description": "Update backup job list component",
- "details": "Add preprocessing status indicators to the backup job list and show preprocessing type and status information.",
- "files": [
- "web/components/Backup/BackupJobList.vue"
- ]
- },
- {
- "status": "todo",
- "description": "Create preprocessing status monitoring",
- "details": "Create component to display preprocessing progress, streaming status, and error messages with real-time updates.",
- "files": [
- "web/components/Backup/PreprocessingStatus.vue"
- ],
- "rest": true
- },
- {
- "status": "skip",
- "description": "Add preprocessing tests",
- "details": "Create comprehensive unit tests for all preprocessing services including validation, execution, streaming operations, and error handling scenarios.",
- "files": [
- "api/src/__test__/preprocessing/preprocessing.service.spec.ts",
- "api/src/__test__/preprocessing/zfs-preprocessing.service.spec.ts",
- "api/src/__test__/preprocessing/flash-preprocessing.service.spec.ts",
- "api/src/__test__/preprocessing/streaming-job-manager.spec.ts"
- ]
- },
- {
- "status": "skip",
- "description": "Add integration tests",
- "details": "Create integration tests for end-to-end backup workflows with preprocessing, including ZFS snapshot streaming, Flash backup streaming, and error recovery scenarios.",
- "files": [
- "api/src/__test__/backup/backup-preprocessing-integration.spec.ts"
- ]
- },
- {
- "status": "skip",
- "description": "Update documentation",
- "details": "Create comprehensive documentation for the preprocessing system including configuration examples, troubleshooting guide, and API reference.",
- "files": [
- "api/docs/backup-preprocessing.md"
- ]
- },
- {
- "status": "skip",
- "description": "Add preprocessing configuration examples",
- "details": "Provide example configurations for each preprocessing type to help users understand the configuration options and best practices.",
- "files": [
- "api/docs/examples/preprocessing-configs.json"
- ]
- }
- ]
-}
\ No newline at end of file
diff --git a/.bivvy/r5N8-climb.md b/.bivvy/r5N8-climb.md
deleted file mode 100644
index 4f9854e75..000000000
--- a/.bivvy/r5N8-climb.md
+++ /dev/null
@@ -1,326 +0,0 @@
-# Backup Source and Destination Processor Refactoring
-
-
-
- r5N8
- task
- Continue refactoring backup system to use separate source and destination processors with support for both streaming and non-streaming backups
-
-
-
- None - this is a refactoring task using existing dependencies
-
-
-
- - Flash source processor and RClone destination processor are already implemented
- - Raw source processor exists but may need updates for streaming compatibility
- - Backup service infrastructure exists but needs integration with new processor pattern
-
-
-
- - api/src/unraid-api/graph/resolvers/backup/source/flash/flash-source-processor.service.ts (already implemented)
- - api/src/unraid-api/graph/resolvers/backup/destination/rclone/rclone-destination-processor.service.ts (already implemented)
- - api/src/unraid-api/graph/resolvers/backup/source/raw/raw-source-processor.service.ts (needs streaming support)
- - api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts (needs processor integration)
- - api/src/unraid-api/graph/resolvers/backup/source/backup-source-processor.interface.ts
- - api/src/unraid-api/graph/resolvers/backup/destination/backup-destination-processor.interface.ts
- - api/src/unraid-api/graph/resolvers/rclone/rclone-api.service.ts (streaming job service)
- - api/src/unraid-api/graph/resolvers/backup/backup.model.ts (backup config models)
-
-
-
-## Overview
-
-This task implements a clean backup system architecture using separate source and destination processors with support for both streaming and non-streaming backup workflows. Since this is a new system, we can implement the optimal design without backward compatibility concerns.
-
-## Current State Analysis
-
-### Already Implemented
-- **FlashSourceProcessor**: Supports streaming via tar command generation for git history inclusion
-- **RCloneDestinationProcessor**: Handles both streaming and regular uploads to RClone remotes
-- **RawSourceProcessor**: Basic implementation without streaming support
-
-### Architecture Pattern
-The processor pattern separates:
-1. **Source Processors**: Handle data preparation, validation, and streaming command generation
-2. **Destination Processors**: Handle upload/transfer logic with streaming support
-3. **Backup Service**: Orchestrates the flow between source and destination processors
-
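-The concrete contracts live in backup-source-processor.interface.ts and backup-destination-processor.interface.ts. As a rough illustration of the split described above (method names and signatures here are illustrative, not the actual interface definitions):
-
-```typescript
-// Illustrative only: the real interfaces are defined in the interface files listed above.
-interface BackupSourceProcessor<TConfig = unknown> {
-    // Validate the source-specific configuration before a run.
-    validate(config: TConfig): Promise<void>;
-    // Prepare the data; returns either a direct path or a stream command.
-    execute(config: TConfig): Promise<BackupSourceResult>;
-    // Optional cleanup of temporary artifacts (snapshots, staging dirs, ...).
-    cleanup?(result: BackupSourceResult): Promise<void>;
-}
-
-interface BackupDestinationProcessor<TConfig = unknown> {
-    readonly supportsStreaming: boolean;
-    // Non-streaming path: upload a prepared file or directory.
-    upload(sourcePath: string, config: TConfig): Promise<void>;
-    // Streaming path: consume the output of the source's stream command.
-    uploadStream(command: string, args: string[], config: TConfig): Promise<void>;
-    cleanup?(config: TConfig): Promise<void>;
-}
-```
-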
-## Requirements
-
-### Functional Requirements
-
-#### Backup Config Simplification
-- **Main Backup Config** should only contain:
- - Job ID and name
- - Cron schedule
- - Enabled/disabled status
- - Created/updated timestamps
- - Last run metadata (status, timestamp)
-- **Source Config** should contain all source-specific configuration
-- **Destination Config** should contain all destination-specific configuration
-- Remove redundant fields from main config (remoteName, destinationPath, rcloneOptions, etc.)
-
-#### Source Processor Interface
-- All source processors must implement consistent validation
-- Streaming-capable sources should generate stream commands (command + args)
-- Non-streaming sources should provide direct file/directory paths
-- Metadata should include streaming capability flags
-
-#### Destination Processor Interface
-- Support both streaming and non-streaming inputs
-- Handle progress reporting and error handling consistently
-- Provide cleanup capabilities for failed transfers
-
-#### Backup Service Integration
-- Automatically detect streaming vs non-streaming workflows
-- Route streaming backups through streaming job service
-- Route regular backups through standard backup service
-- Maintain consistent job tracking and status reporting
-
-### Technical Requirements
-
-#### Simplified Backup Config Structure
-```typescript
-interface BackupJobConfig {
- id: string
- name: string
- schedule: string
- enabled: boolean
- sourceType: SourceType
- destinationType: DestinationType
- sourceConfig: SourceConfig // Type varies by sourceType
- destinationConfig: DestinationConfig // Type varies by destinationType
- createdAt: string
- updatedAt: string
- lastRunAt?: string
- lastRunStatus?: string
- currentJobId?: string
-}
-```
-
-#### Streaming Detection Logic
-```typescript
-if (sourceResult.streamCommand && destinationConfig.useStreaming) {
- // Use streaming workflow
- await streamingJobService.execute(sourceResult, destinationConfig)
-} else {
- // Use regular workflow
- await backupService.execute(sourceResult.outputPath, destinationConfig)
-}
-```
-
-#### Error Handling
-- Consistent error propagation between processors
-- Cleanup coordination between source and destination
-- Timeout handling for both streaming and non-streaming operations
-
-#### Progress Reporting
-- Unified progress interface across streaming and non-streaming
-- Real-time status updates for long-running operations
-- Metadata preservation throughout the pipeline
-
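-As a rough illustration of what a unified progress shape could look like (field names here are assumptions, not a finalized API):
-
-```typescript
-// Illustrative progress payload shared by streaming and non-streaming workflows.
-interface BackupProgress {
-    jobId: string;
-    phase: 'preparing' | 'transferring' | 'cleanup' | 'complete' | 'failed';
-    bytesTransferred: number;
-    percentage?: number; // may be unknown for streaming sources of unknown size
-    speed?: number;      // bytes per second
-    eta?: number;        // estimated seconds remaining, if computable
-}
-
-// Processors and the orchestration service report through a single callback type.
-type ProgressCallback = (progress: BackupProgress) => void;
-```
-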
-## Implementation Details
-
-### Backup Config Model Refactoring
-
-#### Current Issues
-- Main config contains source-specific fields (sourceConfig with nested type-specific configs)
-- Main config contains destination-specific fields (remoteName, destinationPath, rcloneOptions)
-- Mixed concerns make the config complex and hard to extend
-
-#### New Structure
-```typescript
-// Simplified main config
-interface BackupJobConfig {
- id: string
- name: string
- schedule: string
- enabled: boolean
- sourceType: SourceType
- destinationType: DestinationType
- sourceConfig: FlashSourceConfig | RawSourceConfig | ZfsSourceConfig | ScriptSourceConfig
- destinationConfig: RCloneDestinationConfig | LocalDestinationConfig
- createdAt: string
- updatedAt: string
- lastRunAt?: string
- lastRunStatus?: string
- currentJobId?: string
-}
-
-// Destination configs contain all destination-specific settings, e.g. for RClone:
-interface RCloneDestinationConfig {
- remoteName: string
- remotePath: string
- transferOptions?: Record<string, unknown>
- useStreaming?: boolean
- timeout: number
- cleanupOnFailure: boolean
-}
-```
-
-### Source Processor Updates Needed
-
-#### Raw Source Processor Enhancements
-- Add streaming command generation for tar-based compression
-- Implement include/exclude pattern handling in stream commands
-- Add metadata flags for streaming capability
-- Support both streaming and non-streaming modes
-
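-A minimal sketch of the streaming command generation for the raw processor is shown below; the config field names (sourcePath, excludePatterns) are illustrative, not the actual RawSourceConfig shape:
-
-```typescript
-// Illustrative: build a tar command whose stdout can be piped into `rclone rcat`.
-function buildRawStreamCommand(config: {
-    sourcePath: string;
-    excludePatterns?: string[];
-}): { streamCommand: string; streamArgs: string[] } {
-    const excludeArgs = (config.excludePatterns ?? []).map(
-        (pattern) => `--exclude=${pattern}`
-    );
-    return {
-        streamCommand: 'tar',
-        // `-cf -` writes the archive to stdout; `-C` scopes the archive to the source path.
-        streamArgs: ['-cf', '-', ...excludeArgs, '-C', config.sourcePath, '.'],
-    };
-}
-```
-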
-#### ZFS Source Processor (Future)
-- Will need streaming support for ZFS snapshot transfers
-- Should generate appropriate zfs send commands
-- Handle incremental vs full backup streaming
-
-#### Script Source Processor (Future)
-- Execute custom scripts and stream their output
-- Handle script validation and execution environment
-- Support both file output and streaming output modes
-
-### Backup Service Orchestration
-
-#### Workflow Detection
-```typescript
-async executeBackup(config: BackupJobConfig): Promise<BackupResult> { // BackupResult: assumed result type
- const sourceProcessor = this.getSourceProcessor(config.sourceType)
- const destinationProcessor = this.getDestinationProcessor(config.destinationType)
-
- const sourceResult = await sourceProcessor.execute(config.sourceConfig)
-
- if (sourceResult.streamCommand && destinationProcessor.supportsStreaming) {
- return this.executeStreamingBackup(sourceResult, config.destinationConfig)
- } else {
- return this.executeRegularBackup(sourceResult, config.destinationConfig)
- }
-}
-```
-
-#### Job Management Integration
-- Update backup-config.service.ts to use processor pattern
-- Maintain existing cron scheduling functionality
-- Preserve job status tracking and metadata storage
-- Handle processor-specific cleanup requirements
-
-### Interface Standardization
-
-#### BackupSourceResult Enhancement
-```typescript
-interface BackupSourceResult {
- success: boolean
- outputPath?: string
- streamPath?: string // For streaming sources
- streamCommand?: string
- streamArgs?: string[]
- metadata: Record<string, unknown>
- cleanupRequired?: boolean
- error?: string
-}
-```
-
-#### BackupDestinationConfig Enhancement
-```typescript
-interface BackupDestinationConfig {
- timeout: number
- cleanupOnFailure: boolean
- useStreaming?: boolean
- supportsStreaming?: boolean
- // destination-specific config
-}
-```
-
-## Implementation Strategy
-
-### Core Implementation Tasks
-1. **Refactor Backup Config Models** - Simplify main config and move specific settings to source/destination configs
-2. **Update Raw Source Processor** - Add streaming support with tar command generation
-3. **Create Backup Orchestration Service** - Implement workflow detection and processor coordination
-4. **Update Backup Config Service** - Integrate with new processor pattern and simplified config structure
-5. **Update GraphQL Schema** - Reflect new config structure in API
-6. **Add Comprehensive Testing** - Unit and integration tests for all workflows
-
-### Backup Config Refactoring
-- Remove source-specific fields from main BackupJobConfig
-- Remove destination-specific fields from main BackupJobConfig
-- Create proper TypeScript union types for sourceConfig and destinationConfig
-- Update GraphQL input/output types to match new structure
-- Migrate any existing config data to new structure
-
-### Backup Orchestration Service
-Create a new service that:
-- Manages source and destination processor instances
-- Implements streaming vs non-streaming workflow detection
-- Handles job execution coordination
-- Manages cleanup and error handling
-- Provides unified progress reporting
-
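-One way the orchestration service might resolve processor instances is a small registry keyed by type, sketched below (the registry itself is an assumption; the real service may rely on Nest's dependency injection directly):
-
-```typescript
-import { Injectable } from '@nestjs/common';
-
-// Illustrative registry mapping each type to its processor instance.
-@Injectable()
-export class BackupProcessorRegistry {
-    private readonly sourceProcessors = new Map<SourceType, BackupSourceProcessor>();
-    private readonly destinationProcessors = new Map<DestinationType, BackupDestinationProcessor>();
-
-    registerSource(type: SourceType, processor: BackupSourceProcessor): void {
-        this.sourceProcessors.set(type, processor);
-    }
-
-    getSourceProcessor(type: SourceType): BackupSourceProcessor {
-        const processor = this.sourceProcessors.get(type);
-        if (!processor) {
-            throw new Error(`No source processor registered for type: ${type}`);
-        }
-        return processor;
-    }
-
-    // registerDestination / getDestinationProcessor follow the same pattern.
-}
-```
-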
-### Updated Raw Source Processor
-Enhance to support:
-- Streaming tar command generation similar to Flash processor
-- Include/exclude pattern handling in tar commands
-- Metadata flags indicating streaming capability
-- Both streaming and direct file path modes
-
-## Testing Strategy
-
-### Unit Tests
-- Test each processor independently with mock dependencies
-- Validate streaming command generation
-- Test error handling and cleanup scenarios
-- Verify metadata preservation
-- Test config model validation and transformation
-
-### Integration Tests
-- Test complete backup workflows (source → destination)
-- Validate streaming vs non-streaming path selection
-- Test job management and status tracking
-- Verify cleanup coordination
-- Test GraphQL API with new config structure
-
-### Edge Cases
-- Network failures during streaming uploads
-- Source preparation failures with cleanup requirements
-- Mixed streaming/non-streaming configurations
-- Large file handling and timeout scenarios
-- Invalid config combinations
-
-## Success Criteria
-
-### Functional Success
-- Flash backups use streaming when appropriate
-- Raw backups can use either streaming or non-streaming based on configuration
-- Job scheduling and status tracking work correctly
-- All backup types execute successfully
-- Clean separation of concerns in config structure
-
-### Technical Success
-- Clean separation between source and destination concerns
-- Consistent error handling and cleanup across all processors
-- Efficient streaming for large backups
-- Maintainable and extensible processor architecture
-- Simplified and logical config structure
-
-### Performance Success
-- Streaming backups show improved memory usage for large datasets
-- Proper timeout handling prevents hung jobs
-- Resource cleanup prevents memory leaks
-- Fast execution for both streaming and non-streaming workflows
-
-## Future Considerations
-
-### Additional Source Types
-- ZFS snapshot streaming
-- Database dump streaming
-- Custom script output streaming
-- Docker container backup streaming
-
-### Enhanced Destination Support
-- Multiple destination targets
-- Destination validation and health checks
-- Bandwidth throttling and QoS
-- Encryption at destination level
-
-### Monitoring and Observability
-- Detailed metrics for streaming vs non-streaming performance
-- Progress tracking granularity improvements
-- Error categorization and alerting
-- Resource usage monitoring per backup type
\ No newline at end of file
diff --git a/.bivvy/r5N8-moves.json b/.bivvy/r5N8-moves.json
deleted file mode 100644
index d4d5b37a2..000000000
--- a/.bivvy/r5N8-moves.json
+++ /dev/null
@@ -1,132 +0,0 @@
-{
- "Climb": "r5N8",
- "moves": [
- {
- "status": "done",
- "description": "Examine current backup config structure and interfaces",
- "details": "Review backup.model.ts, backup-config.service.ts, and GraphQL schema to understand current structure. Document what needs to be changed for the config simplification."
- },
- {
- "status": "done",
- "description": "Create new backup config interfaces",
- "details": "Define simplified BackupJobConfig interface with only job-level fields (id, name, schedule, enabled, timestamps). Create union types for sourceConfig and destinationConfig. Update backup.model.ts with new interfaces.",
- "rest": true
- },
- {
- "status": "done",
- "description": "Update source processor interfaces for streaming support",
- "details": "Enhance BackupSourceResult interface to include streamCommand, streamArgs, and streaming capability metadata. Update backup-source-processor.interface.ts to support both streaming and non-streaming workflows."
- },
- {
- "status": "done",
- "description": "Update destination processor interfaces",
- "details": "Enhance BackupDestinationConfig interface with useStreaming and supportsStreaming flags. Update backup-destination-processor.interface.ts to handle both streaming and regular backup inputs."
- },
- {
- "status": "done",
- "description": "Add streaming support to Raw Source Processor",
- "details": "Update raw-source-processor.service.ts to generate tar commands for streaming backups. Add include/exclude pattern handling in tar command generation. Add metadata flags for streaming capability. Maintain support for direct file path mode.",
- "rest": true
- },
- {
- "status": "done",
- "description": "Create Backup Orchestration Service",
- "details": "Create new backup-orchestration.service.ts that manages source and destination processor instances. Implement workflow detection logic (streaming vs non-streaming). Handle job execution coordination between processors."
- },
- {
- "status": "todo",
- "description": "Implement streaming workflow execution in orchestration service",
- "details": "Add executeStreamingBackup method that coordinates source streaming commands with destination streaming uploads. Handle progress reporting, error handling, and cleanup coordination for streaming workflows."
- },
- {
- "status": "todo",
- "description": "Implement regular workflow execution in orchestration service",
- "details": "Add executeRegularBackup method for non-streaming workflows. Handle file-based transfers from source output to destination. Implement consistent error handling and cleanup."
- },
- {
- "status": "todo",
- "description": "Update backup-config.service.ts to use new config structure",
- "details": "Refactor createBackupJobConfig and updateBackupJobConfig methods to work with simplified config structure. Remove handling of source/destination specific fields from main config. Update validation logic.",
- "rest": true
- },
- {
- "status": "todo",
- "description": "Integrate orchestration service into backup-config.service.ts",
- "details": "Replace direct rclone service calls in executeBackupJob with orchestration service. Update job execution to use processor pattern. Maintain existing cron scheduling and job tracking functionality."
- },
- {
- "status": "todo",
- "description": "Update GraphQL schema for new config structure",
- "details": "Update backup GraphQL types to reflect simplified BackupJobConfig structure. Create separate input types for different source and destination configs. Update mutations and queries to handle new structure."
- },
- {
- "status": "todo",
- "description": "Update backup JSON forms configuration",
- "details": "Refactor backup-jsonforms-config.ts to remove destination-specific fields (remoteName, destinationPath, rcloneOptions) from basic config. Create separate destination config section. Reorganize form steps to separate job config, source config, and destination config clearly.",
- "rest": true
- },
- {
- "status": "todo",
- "description": "Create source config factory/registry",
- "details": "Create a service to manage source processor instances by type. Implement getSourceProcessor method that returns appropriate processor based on sourceType. Handle processor dependency injection and lifecycle."
- },
- {
- "status": "todo",
- "description": "Create destination config factory/registry",
- "details": "Create a service to manage destination processor instances by type. Implement getDestinationProcessor method that returns appropriate processor based on destinationType. Handle processor dependency injection and lifecycle."
- },
- {
- "status": "todo",
- "description": "Add comprehensive error handling and cleanup coordination",
- "details": "Implement consistent error propagation between source and destination processors. Add cleanup coordination when either source or destination fails. Handle timeout scenarios for both streaming and non-streaming operations.",
- "rest": true
- },
- {
- "status": "todo",
- "description": "Add progress reporting interface",
- "details": "Create unified progress reporting interface that works for both streaming and non-streaming workflows. Implement real-time status updates. Ensure metadata preservation throughout the pipeline."
- },
- {
- "status": "todo",
- "description": "Write unit tests for Raw Source Processor streaming",
- "details": "Test streaming command generation with various include/exclude patterns. Test metadata flags and streaming capability detection. Test error handling and cleanup scenarios. Mock dependencies appropriately."
- },
- {
- "status": "todo",
- "description": "Write unit tests for Backup Orchestration Service",
- "details": "Test workflow detection logic (streaming vs non-streaming). Test source and destination processor coordination. Test error handling and cleanup coordination. Mock all processor dependencies."
- },
- {
- "status": "todo",
- "description": "Write unit tests for updated backup-config.service.ts",
- "details": "Test config creation and updates with new structure. Test validation of source and destination configs. Test job execution with orchestration service. Test cron scheduling functionality."
- },
- {
- "status": "todo",
- "description": "Write integration tests for complete backup workflows",
- "details": "Test Flash source with RClone destination (streaming). Test Raw source with RClone destination (both streaming and non-streaming). Test job management and status tracking. Test cleanup coordination.",
- "rest": true
- },
- {
- "status": "todo",
- "description": "Test edge cases and error scenarios",
- "details": "Test network failures during streaming uploads. Test source preparation failures with cleanup requirements. Test invalid config combinations. Test large file handling and timeout scenarios."
- },
- {
- "status": "todo",
- "description": "Update existing backup configs to new structure",
- "details": "Create migration logic to convert any existing backup configs to new simplified structure. Move source-specific and destination-specific fields to appropriate sub-configs. Test migration with existing data."
- },
- {
- "status": "todo",
- "description": "Performance testing and optimization",
- "details": "Test streaming vs non-streaming performance with large datasets. Verify memory usage improvements for streaming backups. Test timeout handling and resource cleanup. Benchmark execution times for different backup types."
- },
- {
- "status": "todo",
- "description": "Documentation and final validation",
- "details": "Document new config structure and processor architecture. Create examples for different backup configurations. Validate all backup types work correctly. Ensure clean separation of concerns achieved.",
- "rest": true
- }
- ]
-}
\ No newline at end of file
diff --git a/.bivvy/x7K9-climb.md b/.bivvy/x7K9-climb.md
deleted file mode 100644
index 1347861ec..000000000
--- a/.bivvy/x7K9-climb.md
+++ /dev/null
@@ -1,184 +0,0 @@
-**STARTFILE x7K9-climb.md**
-
-
- x7K9
- feature
- Enhanced Backup Job Management System with disable/enable controls, manual triggering, and real-time progress monitoring
-
- No new external dependencies expected - leveraging existing GraphQL subscriptions infrastructure
- None - building on existing backup system architecture
-
- - web/components/Backup/BackupJobConfig.vue (main UI component)
- - web/components/Backup/backup-jobs.query.ts (GraphQL queries/mutations)
- - api/src/unraid-api/graph/resolvers/backup/backup.resolver.ts (GraphQL resolver)
- - api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts (business logic)
- - api/src/unraid-api/graph/resolvers/backup/backup.model.ts (GraphQL schema types)
-
-
- ## Feature Overview
- Enhance the existing backup job management system to provide better control and monitoring capabilities for users managing their backup operations.
-
- ## Purpose Statement
- Users need granular control over their backup jobs with the ability to enable/disable individual jobs, manually trigger scheduled jobs on-demand, and monitor real-time progress of running backup operations.
-
- ## Problem Being Solved
- - Users cannot easily disable/enable individual backup jobs without deleting them
- - No way to manually trigger a scheduled backup job outside its schedule
- - No real-time visibility into backup job progress once initiated
- - Limited feedback on current backup operation status
-
- ## Success Metrics
- - Users can toggle backup jobs on/off without losing configuration
- - Users can manually trigger any configured backup job
- - Real-time progress updates for active backup operations
- - Improved user experience with immediate feedback
-
- ## Functional Requirements
-
- ### Job Control
- - Toggle individual backup jobs enabled/disabled state
- - Manual trigger functionality for any configured backup job
- - Preserve all job configuration when disabling
- - Visual indicators for job state (enabled/disabled/running)
-
- ### Progress Monitoring
- - Real-time subscription for backup job progress
- - Display progress percentage, speed, ETA, and transferred data
- - Show currently running jobs in the UI
- - Update job status in real-time without page refresh
-
- ### UI Enhancements
- - Add enable/disable toggle controls to job cards
- - Add "Run Now" button for manual triggering
- - Progress indicators and status updates
- - Better visual feedback for job states
-
- ## Technical Requirements
-
- ### GraphQL API
- - Add mutation for enabling/disabling backup job configs
- - Add mutation for manually triggering backup jobs by config ID
- - Add subscription for real-time backup job progress updates
- - Extend existing BackupJob type with progress fields
-
- ### Backend Services
- - Enhance BackupConfigService with enable/disable functionality
- - Add manual trigger capability that uses existing job configs
- - Implement subscription resolver for real-time updates
- - Ensure proper error handling and status reporting
-
- ### Frontend Implementation
- - Add toggle controls to BackupJobConfig.vue
- - Implement manual trigger buttons
- - Subscribe to progress updates and display in UI
- - Handle loading states and error conditions
-
- ## User Flow
-
- ### Disabling a Job
- 1. User views backup job list
- 2. User clicks toggle to disable a job
- 3. Job status updates immediately
- 4. Scheduled execution stops, configuration preserved
-
- ### Manual Triggering
- 1. User clicks "Run Now" on any configured job
- 2. System validates job configuration
- 3. Backup initiates immediately
- 4. User sees real-time progress updates
-
- ### Progress Monitoring
- 1. User initiates backup (scheduled or manual)
- 2. Progress subscription automatically activates
- 3. Real-time updates show in UI
- 4. Completion status updates when job finishes
-
- ## API Specifications
-
- ### New Mutations (Nested Pattern)
- Following the established pattern from ArrayMutations, create BackupMutations:
- ```graphql
- type BackupMutations {
- toggleJobConfig(id: String!, enabled: Boolean!): BackupJobConfig
- triggerJob(configId: String!): BackupStatus
- }
- ```
-
- ### Implementation Structure
- - Create `BackupMutationsResolver` class similar to `ArrayMutationsResolver`
- - Use `@ResolveField()` decorators instead of `@Mutation()`
- - Add appropriate `@UsePermissions()` decorators
- - Group all backup-related mutations under `BackupMutations` type
-
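- A rough sketch of the nested resolver (service method names such as setJobEnabled and triggerJobById are placeholders, not the actual BackupConfigService API):
-
- ```typescript
- import { Args, ResolveField, Resolver } from '@nestjs/graphql';
-
- // Sketch only: mirrors the ArrayMutationsResolver pattern described above.
- @Resolver(() => BackupMutations)
- export class BackupMutationsResolver {
-     constructor(private readonly backupConfigService: BackupConfigService) {}
-
-     @ResolveField(() => BackupJobConfig)
-     // @UsePermissions(...) would be applied here, as with the array mutations
-     async toggleJobConfig(
-         @Args('id') id: string,
-         @Args('enabled') enabled: boolean
-     ): Promise<BackupJobConfig> {
-         return this.backupConfigService.setJobEnabled(id, enabled);
-     }
-
-     @ResolveField(() => BackupStatus)
-     async triggerJob(@Args('configId') configId: string): Promise<BackupStatus> {
-         return this.backupConfigService.triggerJobById(configId);
-     }
- }
- ```
-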
- ### New Subscription
- ```graphql
- backupJobProgress(jobId: String): BackupJob
- ```
-
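- On the resolver side, the subscription could follow the standard Nest pattern, roughly as below (a graphql-subscriptions style PubSub and the 'backupJobProgress' trigger name are assumptions):
-
- ```typescript
- import { Args, Resolver, Subscription } from '@nestjs/graphql';
- import { PubSub } from 'graphql-subscriptions';
-
- // Sketch only: trigger name and payload shape are assumptions.
- @Resolver()
- export class BackupSubscriptionResolver {
-     constructor(private readonly pubSub: PubSub) {}
-
-     @Subscription(() => BackupJob, {
-         // Only forward events for the requested job when a jobId argument is supplied.
-         filter: (payload, variables) =>
-             !variables.jobId || payload.backupJobProgress.id === variables.jobId,
-     })
-     backupJobProgress(@Args('jobId', { nullable: true }) jobId?: string) {
-         return this.pubSub.asyncIterator('backupJobProgress');
-     }
- }
- ```
-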
- ### Enhanced Types
- - Extend BackupJob with progress percentage
- - Add jobConfigId reference to running jobs
- - Include more detailed status information
-
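- The extended fields might look roughly like this (names and nullability are assumptions):
-
- ```typescript
- import { Field, Float, ObjectType } from '@nestjs/graphql';
-
- // Sketch of the additional fields on the existing BackupJob type; names are illustrative.
- @ObjectType()
- export class BackupJob {
-     // ...existing fields...
-
-     @Field(() => Float, { nullable: true, description: 'Completion percentage, 0-100' })
-     progressPercentage?: number;
-
-     @Field(() => String, { nullable: true, description: 'Config this run was started from' })
-     jobConfigId?: string;
-
-     @Field(() => String, { nullable: true, description: 'Human-readable status detail' })
-     statusDetail?: string;
- }
- ```
-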
- ### Frontend GraphQL Usage
- ```graphql
- mutation ToggleBackupJob($id: String!, $enabled: Boolean!) {
- backup {
- toggleJobConfig(id: $id, enabled: $enabled) {
- id
- enabled
- updatedAt
- }
- }
- }
-
- mutation TriggerBackupJob($configId: String!) {
- backup {
- triggerJob(configId: $configId) {
- status
- jobId
- }
- }
- }
- ```
-
- ## Implementation Considerations
-
- ### Real-time Updates
- - Use existing GraphQL subscription infrastructure
- - Efficient polling of rclone API for progress data
- - Proper cleanup of subscriptions when jobs complete
-
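- One straightforward shape for the polling loop (the rclone status call and the trigger name are placeholders, not the actual RCloneApiService API):
-
- ```typescript
- // Sketch only: poll rclone for progress, publish it to subscribers, and stop on completion.
- function startProgressPolling(
-     jobId: string,
-     rcloneApi: { getJobStatus: (id: string) => Promise<{ finished: boolean; progress: number }> },
-     pubSub: { publish: (trigger: string, payload: unknown) => Promise<void> },
-     intervalMs = 2000
- ): () => void {
-     const timer = setInterval(async () => {
-         const status = await rcloneApi.getJobStatus(jobId);
-         await pubSub.publish('backupJobProgress', { backupJobProgress: { id: jobId, ...status } });
-         if (status.finished) {
-             clearInterval(timer); // ensure completed jobs stop emitting updates
-         }
-     }, intervalMs);
-     // Returned disposer lets callers stop polling early (e.g. when the last subscriber leaves).
-     return () => clearInterval(timer);
- }
- ```
-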
- ### State Management
- - Update job configs atomically
- - Handle concurrent operations gracefully
- - Maintain consistency between scheduled and manual executions
-
- ### Error Handling
- - Validate job configs before manual triggering
- - Graceful degradation if progress updates fail
- - Clear error messages for failed operations
-
- ## Testing Approach
-
- ### Test Cases
- - Toggle job enabled/disabled state
- - Manual trigger of backup jobs
- - Real-time progress subscription functionality
- - Error handling for invalid operations
- - Concurrent job execution scenarios
-
- ### Acceptance Criteria
- - Jobs can be disabled/enabled without data loss
- - Manual triggers work for all valid job configurations
- - Progress updates are accurate and timely
- - UI responds appropriately to all state changes
- - No memory leaks from subscription management
-
- ## Future Considerations
- - Job scheduling modification (change cron without recreate)
- - Backup job templates and bulk operations
- - Advanced progress details (file-level progress)
- - Job history and logging improvements
-
-**ENDFILE**
\ No newline at end of file
diff --git a/.bivvy/x7K9-moves.json b/.bivvy/x7K9-moves.json
deleted file mode 100644
index bf6a60f34..000000000
--- a/.bivvy/x7K9-moves.json
+++ /dev/null
@@ -1,63 +0,0 @@
-{
- "Climb": "x7K9",
- "moves": [
- {
- "status": "done",
- "description": "Create BackupMutations GraphQL type and resolver structure",
- "details": "Add BackupMutations type to backup.model.ts, create backup-mutations.resolver.ts file, and move existing mutations (createBackupJobConfig, updateBackupJobConfig, deleteBackupJobConfig, initiateBackup) from BackupResolver to the new BackupMutationsResolver following the ArrayMutationsResolver pattern"
- },
- {
- "status": "done",
- "description": "Implement toggleJobConfig mutation",
- "details": "Add toggleJobConfig resolver method with proper permissions and update BackupConfigService to handle enable/disable functionality"
- },
- {
- "status": "done",
- "description": "Implement triggerJob mutation",
- "details": "Add triggerJob resolver method that manually triggers a backup job using existing config, with validation and error handling"
- },
- {
- "status": "done",
- "description": "Add backupJobProgress subscription",
- "details": "Create GraphQL subscription resolver for real-time backup job progress updates using existing rclone API polling",
- "rest": true
- },
- {
- "status": "done",
- "description": "Enhance BackupJob type with progress fields",
- "details": "Add progress percentage, configId reference, and detailed status fields to BackupJob model"
- },
- {
- "status": "done",
- "description": "Update frontend GraphQL queries and mutations",
- "details": "Add new mutations and subscription to backup-jobs.query.ts following the nested mutation pattern"
- },
- {
- "status": "done",
- "description": "Add toggle controls to BackupJobConfig.vue",
- "details": "Add enable/disable toggle switches to each job card with proper state management and error handling"
- },
- {
- "status": "done",
- "description": "Add manual trigger buttons to BackupJobConfig.vue",
- "details": "Add 'Run Now' buttons with loading states and trigger the new mutation",
- "rest": true
- },
- {
- "status": "done",
- "description": "Implement progress monitoring in the UI",
- "details": "Subscribe to backup job progress and display real-time updates in the job cards with progress bars and status"
- },
- {
- "status": "done",
- "description": "Add visual indicators for job states",
- "details": "Enhance job cards with better status indicators for enabled/disabled/running states and improve overall UX"
- },
- {
- "status": "todo",
- "description": "Test integration and error handling",
- "details": "Test all functionality including edge cases, error scenarios, and subscription cleanup",
- "rest": true
- }
- ]
-}
\ No newline at end of file