mirror of
https://github.com/unraid/api.git
synced 2026-01-08 17:49:59 -06:00
chore: remove .bivvy
@@ -1,8 +0,0 @@
---
id: abcd
type: feature
description: This is an example Climb
---

## Example PRD

TODO
@@ -1,21 +0,0 @@
{
    "climb": "0000",
    "moves": [
        {
            "status": "complete",
            "description": "install the dependencies",
            "details": "install the deps listed as New Dependencies"
        }, {
            "status": "skip",
            "description": "Write tests"
        }, {
            "status": "climbing",
            "description": "Build the first part of the feature",
            "rest": true
        }, {
            "status": "todo",
            "description": "Build the last part of the feature",
            "details": "After this, you'd ask the user if they want to return to write tests"
        }
    ]
}
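The moves-file shape above can be captured with a small TypeScript model. These type and field names are assumptions made for illustration; they are not part of Bivvy itself.

```typescript
// Hypothetical model of a Bivvy moves file; names are illustrative only.
type MoveStatus = 'complete' | 'skip' | 'climbing' | 'todo';

interface Move {
    status: MoveStatus;
    description: string;
    details?: string; // optional elaboration of the move
    rest?: boolean;   // pause for user confirmation after this move
    files?: string[]; // files the move touches
}

interface MovesFile {
    climb: string;
    moves: Move[];
}

// At most one move should be in progress ("climbing") at a time.
function validateMoves(file: MovesFile): boolean {
    const climbing = file.moves.filter((m) => m.status === 'climbing');
    return climbing.length <= 1;
}
```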
@@ -1,139 +0,0 @@
**STARTFILE k8P2-climb.md**

<Climb>
<header>
<id>k8P2</id>
<type>bug</type>
<description>Fix RClone backup jobs not appearing in jobs list and missing status data</description>
</header>
<newDependencies>None - this is a bug fix for existing functionality</newDependencies>
<prerequisiteChanges>None - working with existing backup service implementation</prerequisiteChanges>
<relevantFiles>
- api/src/unraid-api/graph/resolvers/rclone/rclone-api.service.ts (main RClone API service)
- api/src/unraid-api/graph/resolvers/backup/backup-mutations.resolver.ts (backup mutations)
- web/components/Backup/BackupOverview.vue (frontend backup overview)
- web/components/Backup/backup-jobs.query.ts (GraphQL query for jobs)
- api/src/unraid-api/graph/resolvers/backup/backup-queries.resolver.ts (backup queries resolver)
</relevantFiles>
<everythingElse>
## Problem Statement

The newly implemented backup service has two critical issues:

1. **Jobs not appearing in non-system jobs list**: When users trigger backup jobs via the "Run Now" button in BackupOverview.vue, these jobs are not showing up in the jobs list query, even when `showSystemJobs: false`
2. **Missing job status data**: Jobs that are started don't return proper status information, making it impossible to track backup progress

## Background

This issue emerged immediately after implementing the new backup service. The backup functionality uses:
- RClone RC daemon for job execution via Unix socket
- GraphQL mutations for triggering backups (`triggerJob`, `initiateBackup`)
- Job grouping system with groups like `backup/manual` and `backup/${id}`
- Vue.js frontend with real-time job status monitoring

## Root Cause Analysis Areas

### 1. Job Group Classification
The current implementation sets job groups as:
- `backup/manual` for manual backups
- `backup/${id}` for configured job backups

**Potential Issue**: The jobs query may be filtering these groups incorrectly, classifying user-initiated backups as "system jobs"
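A minimal sketch of the suspected fix, assuming the resolver classifies jobs by group prefix. The helper names and the defaulting rule for ungrouped jobs are assumptions for illustration, not the resolver's actual code.

```typescript
// Hypothetical classifier: treat backup/* groups as user-initiated, not system jobs.
function isSystemJob(group: string | undefined): boolean {
    if (!group) return true; // assume ungrouped jobs default to system
    // backup/manual and backup/<configId> are user-initiated
    if (group.startsWith('backup/')) return false;
    return true;
}

interface JobInfo {
    id: string;
    group?: string;
}

// Mirrors the showSystemJobs flag described above.
function filterJobs(jobs: JobInfo[], showSystemJobs: boolean): JobInfo[] {
    return jobs.filter((j) => showSystemJobs || !isSystemJob(j.group));
}
```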
### 2. RClone API Response Handling
**Potential Issue**: The `startBackup` method may not be properly handling or returning job metadata from RClone RC API responses

### 3. Job Status Synchronization
**Potential Issue**: There may be a disconnect between job initiation and the jobs listing/status APIs

### 4. Logging Deficiency
**Current Gap**: Insufficient logging around RClone API responses makes debugging difficult

## Technical Requirements

### Enhanced Logging
- Add comprehensive debug logging for all RClone API calls and responses
- Log job initiation parameters and returned job metadata
- Log job listing and filtering logic
- Add structured logging for job group classification
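One way to meet the logging requirements above is a thin wrapper around the RClone RC call that logs every request and response. The `callRc` signature and the log format are placeholders, not the actual service API.

```typescript
// Hypothetical wrapper adding structured debug logging around RClone RC calls.
type RcCall = (endpoint: string, params: Record<string, unknown>) => Promise<unknown>;

function withRcLogging(callRc: RcCall, log: (msg: string) => void): RcCall {
    return async (endpoint, params) => {
        log(`rclone rc -> ${endpoint} params=${JSON.stringify(params)}`);
        try {
            const response = await callRc(endpoint, params);
            log(`rclone rc <- ${endpoint} response=${JSON.stringify(response)}`);
            return response;
        } catch (err) {
            log(`rclone rc !! ${endpoint} error=${String(err)}`);
            throw err;
        }
    };
}
```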
### Job Classification Fix
- Ensure user-initiated backup jobs are properly classified as non-system jobs
- Review and fix job group filtering logic in the jobs query resolver
- Validate that job groups `backup/manual` and `backup/${id}` are treated as non-system

### Status Data Flow
- Verify job ID propagation from the RClone startBackup response
- Ensure the job status API correctly retrieves and formats status data
- Fix any data transformation issues between the RClone API and GraphQL responses

### Data Model Consistency
- Ensure the BackupJob GraphQL type includes all necessary fields (note: a current linter error shows a missing 'type' field)
- Verify job data structure consistency between API and frontend
## Acceptance Criteria

### Primary Fixes
1. **Jobs Visibility**: User-triggered backup jobs appear in the jobs list when `showSystemJobs: false`
2. **Status Data**: Job status data (progress, speed, ETA, etc.) is properly retrieved and displayed
3. **Job ID Tracking**: Job IDs are properly returned and can be used for status queries

### Secondary Improvements
4. **Enhanced Logging**: Comprehensive logging for debugging RClone interactions
5. **Type Safety**: Fix TypeScript/linting errors in BackupOverview.vue
6. **System Jobs Investigation**: Document findings about excessive system jobs

## Testing Approach

### Manual Testing
1. Trigger a backup via the "Run Now" button in BackupOverview.vue
2. Verify the job appears in the running jobs list (with `showSystemJobs: false`)
3. Confirm job status data displays correctly (progress, speed, etc.)
4. Test both the `triggerJob` (configured jobs) and `initiateBackup` (manual jobs) flows

### API Testing
1. Verify RClone API responses contain the expected job metadata
2. Test the job listing API with various group filters
3. Validate the job status API returns complete data

### Edge Cases
1. Test behavior when the RClone daemon is restarted
2. Test concurrent backup jobs
3. Test backup job cancellation/completion scenarios
## Implementation Strategy

### Phase 1: Debugging & Logging
- Add comprehensive logging to the RClone API service
- Log all API responses and job metadata
- Add logging to the job filtering logic

### Phase 2: Job Classification Fix
- Fix job group filtering in the backup queries resolver
- Ensure proper non-system job classification
- Test job visibility in the frontend

### Phase 3: Status Data Fix
- Fix job status data retrieval and formatting
- Ensure complete job metadata is available
- Fix TypeScript/GraphQL type issues

### Phase 4: Validation & Testing
- Comprehensive testing of the backup job lifecycle
- Validate all acceptance criteria
- Document system jobs investigation findings

## Security Considerations
- Ensure logging doesn't expose sensitive backup configuration data
- Maintain proper authentication/authorization for backup operations
- Validate that job status queries don't leak information between users

## Performance Considerations
- Ensure logging doesn't significantly impact performance
- Optimize job listing queries if necessary
- Consider caching strategies for frequently accessed job data

## Known Constraints
- Must work with the existing RClone RC daemon setup
- Cannot break existing backup functionality during fixes
- Must maintain backward compatibility with existing backup configurations
</Climb>

**ENDFILE**
@@ -1,53 +0,0 @@
{
    "climb": "k8P2",
    "moves": [
        {
            "status": "complete",
            "description": "Investigate current backup jobs query resolver implementation",
            "details": "Find and examine the backup-queries.resolver.ts to understand how jobs are currently filtered and what determines system vs non-system jobs"
        },
        {
            "status": "complete",
            "description": "Add enhanced logging to RClone API service",
            "details": "Add comprehensive debug logging to startBackup, listRunningJobs, and getJobStatus methods in rclone-api.service.ts to capture API responses and job metadata"
        },
        {
            "status": "complete",
            "description": "Add logging to job filtering logic",
            "details": "Add logging to the backup jobs query resolver to understand how jobs are being classified and filtered",
            "rest": true
        },
        {
            "status": "complete",
            "description": "Fix job group classification in backup queries resolver",
            "details": "Ensure that jobs with groups 'backup/manual' and 'backup/{id}' are properly classified as non-system jobs"
        },
        {
            "status": "complete",
            "description": "Verify job ID propagation from RClone responses",
            "details": "Ensure that job IDs returned from RClone startBackup are properly captured and returned in GraphQL mutations"
        },
        {
            "status": "todo",
            "description": "Fix job status data retrieval and formatting",
            "details": "Ensure getJobStatus properly retrieves and formats all status data (progress, speed, ETA, etc.) for display in the frontend",
            "rest": true
        },
        {
            "status": "todo",
            "description": "Fix TypeScript errors in BackupOverview.vue",
            "details": "Add missing 'type' field to BackupJob GraphQL type and fix any other type inconsistencies"
        },
        {
            "status": "todo",
            "description": "Test job visibility and status data end-to-end",
            "details": "Manually test triggering backup jobs via 'Run Now' button and verify they appear in jobs list with proper status data",
            "rest": true
        },
        {
            "status": "todo",
            "description": "Document system jobs investigation findings",
            "details": "Investigate why there are many system jobs running and document findings for potential future work"
        }
    ]
}
.bivvy/m9X4-climb.md (1412 lines): file diff suppressed because it is too large.
@@ -1,180 +0,0 @@
{
    "climb": "m9X4",
    "moves": [
        {
            "status": "complete",
            "description": "Create preprocessing types and validation DTOs",
            "details": "Create the core preprocessing types, enums, and validation DTOs as specified in the climb document. This includes PreprocessType enum, validation classes for ZFS, Flash, and Script configurations, and the main PreprocessConfigDto classes.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/preprocessing.types.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Extend backup job data models with preprocessing fields",
            "details": "Add preprocessing fields to the BackupJobConfig GraphQL model and input types. Include preprocessType, preprocessConfig, preprocessTimeout, and cleanupOnFailure fields with proper GraphQL decorators and validation.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/backup.model.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Update BackupJobConfigData interface with preprocessing fields",
            "details": "Extend the BackupJobConfigData interface to include the new preprocessing fields and update the mapToGraphQL method to handle the new fields.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Create preprocessing validation service",
            "details": "Implement the PreprocessConfigValidationService with business logic validation, async validation for ZFS pools and scripts, and transformation methods as detailed in the climb document.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/preprocessing-validation.service.ts"
            ],
            "rest": true
        },
        {
            "status": "complete",
            "description": "Create streaming job manager",
            "details": "Implement the StreamingJobManager class to handle subprocess lifecycle management, process tracking, progress monitoring, and cleanup for streaming operations like ZFS and Flash backups.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/streaming-job-manager.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Create core preprocessing service",
            "details": "Implement the main PreprocessingService with methods for executing different preprocessing types, handling streaming operations, and managing cleanup. Include the PreprocessResult interface and core execution logic.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/preprocessing.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Extend RClone API service with streaming capabilities",
            "details": "Add streaming backup methods to RCloneApiService including startStreamingBackup, streaming job tracking integration, and unified job status management for both daemon and streaming jobs.",
            "files": [
                "api/src/unraid-api/graph/resolvers/rclone/rclone-api.service.ts"
            ],
            "rest": true
        },
        {
            "status": "complete",
            "description": "Create ZFS preprocessing implementation",
            "details": "Implement ZFS-specific preprocessing including snapshot creation, streaming via `zfs send | rclone rcat`, snapshot cleanup, and error handling for ZFS operations.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/zfs-preprocessing.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Create Flash backup preprocessing implementation",
            "details": "Implement Flash backup preprocessing with local git repository setup, git operations, and streaming via `tar cf - /boot/.git | rclone rcat` as detailed in the climb document.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/flash-preprocessing.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Create custom script preprocessing implementation",
            "details": "Implement custom script preprocessing with sandboxed execution, parameter passing, timeout handling, and file-based output (non-streaming for security).",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/preprocessing/script-preprocessing.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Update backup config service with preprocessing integration",
            "details": "Integrate preprocessing validation and execution into the backup config service. Update createBackupJobConfig, updateBackupJobConfig, and executeBackupJob methods to handle preprocessing.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts"
            ]
        },
        {
            "status": "complete",
            "description": "Update backup module with new services",
            "details": "Add all new preprocessing services to the BackupModule providers array and ensure proper dependency injection setup.",
            "files": [
                "api/src/unraid-api/graph/resolvers/backup/backup.module.ts"
            ],
            "rest": true
        },
        {
            "status": "complete",
            "description": "Update web GraphQL queries and fragments",
            "details": "Add preprocessing fields to the BACKUP_JOB_CONFIG_FRAGMENT and update mutations to include the new preprocessing configuration fields.",
            "files": [
                "web/components/Backup/backup-jobs.query.ts"
            ]
        },
        {
            "status": "todo",
            "description": "Create preprocessing UI components",
            "details": "Create Vue component for preprocessing configuration with dropdown for preprocessing type selection and dynamic form fields for each preprocessing type (ZFS, Flash, Script).",
            "files": [
                "web/components/Backup/PreprocessingConfig.vue"
            ]
        },
        {
            "status": "todo",
            "description": "Update backup job form component",
            "details": "Integrate the PreprocessingConfig component into the backup job form and handle preprocessing configuration state management.",
            "files": [
                "web/components/Backup/BackupJobForm.vue"
            ]
        },
        {
            "status": "todo",
            "description": "Update backup job list component",
            "details": "Add preprocessing status indicators to the backup job list and show preprocessing type and status information.",
            "files": [
                "web/components/Backup/BackupJobList.vue"
            ]
        },
        {
            "status": "todo",
            "description": "Create preprocessing status monitoring",
            "details": "Create component to display preprocessing progress, streaming status, and error messages with real-time updates.",
            "files": [
                "web/components/Backup/PreprocessingStatus.vue"
            ],
            "rest": true
        },
        {
            "status": "skip",
            "description": "Add preprocessing tests",
            "details": "Create comprehensive unit tests for all preprocessing services including validation, execution, streaming operations, and error handling scenarios.",
            "files": [
                "api/src/__test__/preprocessing/preprocessing.service.spec.ts",
                "api/src/__test__/preprocessing/zfs-preprocessing.service.spec.ts",
                "api/src/__test__/preprocessing/flash-preprocessing.service.spec.ts",
                "api/src/__test__/preprocessing/streaming-job-manager.spec.ts"
            ]
        },
        {
            "status": "skip",
            "description": "Add integration tests",
            "details": "Create integration tests for end-to-end backup workflows with preprocessing, including ZFS snapshot streaming, Flash backup streaming, and error recovery scenarios.",
            "files": [
                "api/src/__test__/backup/backup-preprocessing-integration.spec.ts"
            ]
        },
        {
            "status": "skip",
            "description": "Update documentation",
            "details": "Create comprehensive documentation for the preprocessing system including configuration examples, troubleshooting guide, and API reference.",
            "files": [
                "api/docs/backup-preprocessing.md"
            ]
        },
        {
            "status": "skip",
            "description": "Add preprocessing configuration examples",
            "details": "Provide example configurations for each preprocessing type to help users understand the configuration options and best practices.",
            "files": [
                "api/docs/examples/preprocessing-configs.json"
            ]
        }
    ]
}
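Several of the moves above center on shell pipelines like `zfs send | rclone rcat`. A streaming job manager might assemble (but not yet execute) such a pipeline as a pair of commands; the builder function and option shapes below are assumptions for illustration, not the StreamingJobManager's actual API.

```typescript
// Hypothetical pipeline builder for streaming backups. It returns the command
// pair rather than spawning it, so the caller controls process lifecycle,
// progress monitoring, and cleanup.
interface StreamPipeline {
    source: { command: string; args: string[] }; // writes the backup stream to stdout
    sink: { command: string; args: string[] };   // reads stdin and uploads it
}

function buildZfsStreamPipeline(snapshot: string, remote: string, remotePath: string): StreamPipeline {
    return {
        // `zfs send <snapshot>` emits the snapshot stream on stdout...
        source: { command: 'zfs', args: ['send', snapshot] },
        // ...and `rclone rcat <remote>:<path>` uploads whatever arrives on stdin.
        sink: { command: 'rclone', args: ['rcat', `${remote}:${remotePath}`] },
    };
}
```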
@@ -1,326 +0,0 @@
# Backup Source and Destination Processor Refactoring

<Climb>
<header>
<id>r5N8</id>
<type>task</type>
<description>Continue refactoring backup system to use separate source and destination processors with support for both streaming and non-streaming backups</description>
</header>

<newDependencies>
None - this is a refactoring task using existing dependencies
</newDependencies>

<prerequisiteChanges>
- Flash source processor and RClone destination processor are already implemented
- Raw source processor exists but may need updates for streaming compatibility
- Backup service infrastructure exists but needs integration with new processor pattern
</prerequisiteChanges>

<relevantFiles>
- api/src/unraid-api/graph/resolvers/backup/source/flash/flash-source-processor.service.ts (already implemented)
- api/src/unraid-api/graph/resolvers/backup/destination/rclone/rclone-destination-processor.service.ts (already implemented)
- api/src/unraid-api/graph/resolvers/backup/source/raw/raw-source-processor.service.ts (needs streaming support)
- api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts (needs processor integration)
- api/src/unraid-api/graph/resolvers/backup/source/backup-source-processor.interface.ts
- api/src/unraid-api/graph/resolvers/backup/destination/backup-destination-processor.interface.ts
- api/src/unraid-api/graph/resolvers/rclone/rclone-api.service.ts (streaming job service)
- api/src/unraid-api/graph/resolvers/backup/backup.model.ts (backup config models)
</relevantFiles>
</Climb>
## Overview

This task implements a clean backup system architecture using separate source and destination processors with support for both streaming and non-streaming backup workflows. Since this is a new system, we can implement the optimal design without backward compatibility concerns.

## Current State Analysis

### Already Implemented
- **FlashSourceProcessor**: Supports streaming via tar command generation for git history inclusion
- **RCloneDestinationProcessor**: Handles both streaming and regular uploads to RClone remotes
- **RawSourceProcessor**: Basic implementation without streaming support

### Architecture Pattern
The processor pattern separates:
1. **Source Processors**: Handle data preparation, validation, and streaming command generation
2. **Destination Processors**: Handle upload/transfer logic with streaming support
3. **Backup Service**: Orchestrates the flow between source and destination processors
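The three-way split above can be expressed as a pair of interfaces plus a small orchestration decision. These declarations are an illustrative reduction of the real interface files, not their actual contents.

```typescript
// Illustrative reduction of the processor pattern described above.
interface SourceResult {
    outputPath?: string;    // non-streaming: prepared file/directory path
    streamCommand?: string; // streaming: command that writes data to stdout
    streamArgs?: string[];
}

interface SourceProcessor {
    validate(config: unknown): Promise<void>;
    execute(config: unknown): Promise<SourceResult>;
}

interface DestinationProcessor {
    supportsStreaming: boolean;
    upload(input: SourceResult, config: unknown): Promise<void>;
}

// The backup service's core routing decision.
function chooseWorkflow(result: SourceResult, dest: DestinationProcessor): 'streaming' | 'regular' {
    return result.streamCommand && dest.supportsStreaming ? 'streaming' : 'regular';
}
```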
## Requirements

### Functional Requirements

#### Backup Config Simplification
- **Main Backup Config** should only contain:
  - Job ID and name
  - Cron schedule
  - Enabled/disabled status
  - Created/updated timestamps
  - Last run metadata (status, timestamp)
- **Source Config** should contain all source-specific configuration
- **Destination Config** should contain all destination-specific configuration
- Remove redundant fields from main config (remoteName, destinationPath, rcloneOptions, etc.)

#### Source Processor Interface
- All source processors must implement consistent validation
- Streaming-capable sources should generate stream commands (command + args)
- Non-streaming sources should provide direct file/directory paths
- Metadata should include streaming capability flags
#### Destination Processor Interface
- Support both streaming and non-streaming inputs
- Handle progress reporting and error handling consistently
- Provide cleanup capabilities for failed transfers

#### Backup Service Integration
- Automatically detect streaming vs non-streaming workflows
- Route streaming backups through the streaming job service
- Route regular backups through the standard backup service
- Maintain consistent job tracking and status reporting

### Technical Requirements

#### Simplified Backup Config Structure
```typescript
interface BackupJobConfig {
    id: string
    name: string
    schedule: string
    enabled: boolean
    sourceType: SourceType
    destinationType: DestinationType
    sourceConfig: SourceConfig // Type varies by sourceType
    destinationConfig: DestinationConfig // Type varies by destinationType
    createdAt: string
    updatedAt: string
    lastRunAt?: string
    lastRunStatus?: string
    currentJobId?: string
}
```
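Because the config type varies by `sourceType`, the field pairs lend themselves to a TypeScript discriminated union, so `sourceConfig` narrows automatically. The enum values and per-type config fields below are assumptions based on the source types named in this document.

```typescript
// Hypothetical discriminated union: sourceConfig narrows by sourceType.
type SourceJob =
    | { sourceType: 'flash'; sourceConfig: { includeGitHistory: boolean } }
    | { sourceType: 'raw'; sourceConfig: { path: string } }
    | { sourceType: 'zfs'; sourceConfig: { dataset: string } };

function describeSource(job: SourceJob): string {
    switch (job.sourceType) {
        case 'flash':
            // job.sourceConfig is narrowed to the flash shape here
            return `flash (git history: ${job.sourceConfig.includeGitHistory})`;
        case 'raw':
            return `raw (${job.sourceConfig.path})`;
        case 'zfs':
            return `zfs (${job.sourceConfig.dataset})`;
    }
}
```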
#### Streaming Detection Logic
```typescript
if (sourceResult.streamCommand && destinationConfig.useStreaming) {
    // Use streaming workflow
    await streamingJobService.execute(sourceResult, destinationConfig)
} else {
    // Use regular workflow
    await backupService.execute(sourceResult.outputPath, destinationConfig)
}
```
#### Error Handling
- Consistent error propagation between processors
- Cleanup coordination between source and destination
- Timeout handling for both streaming and non-streaming operations

#### Progress Reporting
- Unified progress interface across streaming and non-streaming
- Real-time status updates for long-running operations
- Metadata preservation throughout the pipeline
## Implementation Details

### Backup Config Model Refactoring

#### Current Issues
- Main config contains source-specific fields (sourceConfig with nested type-specific configs)
- Main config contains destination-specific fields (remoteName, destinationPath, rcloneOptions)
- Mixed concerns make the config complex and hard to extend

#### New Structure
```typescript
// Simplified main config
interface BackupJobConfig {
    id: string
    name: string
    schedule: string
    enabled: boolean
    sourceType: SourceType
    destinationType: DestinationType
    sourceConfig: FlashSourceConfig | RawSourceConfig | ZfsSourceConfig | ScriptSourceConfig
    destinationConfig: RCloneDestinationConfig | LocalDestinationConfig
    createdAt: string
    updatedAt: string
    lastRunAt?: string
    lastRunStatus?: string
    currentJobId?: string
}

// Destination configs contain all destination-specific settings
interface RCloneDestinationConfig {
    remoteName: string
    remotePath: string
    transferOptions?: Record<string, unknown>
    useStreaming?: boolean
    timeout: number
    cleanupOnFailure: boolean
}
```
### Source Processor Updates Needed

#### Raw Source Processor Enhancements
- Add streaming command generation for tar-based compression
- Implement include/exclude pattern handling in stream commands
- Add metadata flags for streaming capability
- Support both streaming and non-streaming modes

#### ZFS Source Processor (Future)
- Will need streaming support for ZFS snapshot transfers
- Should generate appropriate zfs send commands
- Handle incremental vs full backup streaming

#### Script Source Processor (Future)
- Execute custom scripts and stream their output
- Handle script validation and execution environment
- Support both file output and streaming output modes
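For the Raw processor enhancement, streaming command generation might look like the following sketch. GNU tar `--exclude` semantics are assumed, and the config shape is invented for illustration; this is not the shipped implementation.

```typescript
// Hypothetical tar-based stream command builder for the raw source processor.
interface RawSourceConfig {
    path: string;
    excludePatterns?: string[];
}

function buildRawStreamCommand(config: RawSourceConfig): { command: string; args: string[] } {
    const excludes = (config.excludePatterns ?? []).map((p) => `--exclude=${p}`);
    // `tar cf -` writes the archive to stdout so it can be piped into rclone rcat
    return { command: 'tar', args: ['cf', '-', ...excludes, config.path] };
}
```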
### Backup Service Orchestration

#### Workflow Detection
```typescript
async executeBackup(config: BackupJobConfig): Promise<BackupResult> {
    const sourceProcessor = this.getSourceProcessor(config.sourceType)
    const destinationProcessor = this.getDestinationProcessor(config.destinationType)

    const sourceResult = await sourceProcessor.execute(config.sourceConfig)

    if (sourceResult.streamCommand && destinationProcessor.supportsStreaming) {
        return this.executeStreamingBackup(sourceResult, config.destinationConfig)
    } else {
        return this.executeRegularBackup(sourceResult, config.destinationConfig)
    }
}
```
#### Job Management Integration
- Update backup-config.service.ts to use the processor pattern
- Maintain existing cron scheduling functionality
- Preserve job status tracking and metadata storage
- Handle processor-specific cleanup requirements

### Interface Standardization

#### BackupSourceResult Enhancement
```typescript
interface BackupSourceResult {
    success: boolean
    outputPath?: string
    streamPath?: string // For streaming sources
    streamCommand?: string
    streamArgs?: string[]
    metadata: Record<string, unknown>
    cleanupRequired?: boolean
    error?: string
}
```
#### BackupDestinationConfig Enhancement
```typescript
interface BackupDestinationConfig {
    timeout: number
    cleanupOnFailure: boolean
    useStreaming?: boolean
    supportsStreaming?: boolean
    // destination-specific config
}
```
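To show how a destination processor might honor the shared `timeout` and `cleanupOnFailure` fields, here is a hedged skeleton; the class and its injected collaborators are invented for illustration and do not mirror the real RCloneDestinationProcessor.

```typescript
// Hypothetical destination processor skeleton honoring the shared config fields.
interface DestConfig {
    timeout: number;          // milliseconds before the upload is abandoned
    cleanupOnFailure: boolean;
}

class SketchDestinationProcessor {
    constructor(
        private readonly doUpload: (path: string) => Promise<void>,
        private readonly doCleanup: (path: string) => Promise<void>,
    ) {}

    async upload(path: string, config: DestConfig): Promise<'ok' | 'failed'> {
        // Race the upload against a timeout so hung transfers cannot block forever.
        const timeout = new Promise<never>((_, reject) =>
            setTimeout(() => reject(new Error('upload timed out')), config.timeout),
        );
        try {
            await Promise.race([this.doUpload(path), timeout]);
            return 'ok';
        } catch {
            if (config.cleanupOnFailure) await this.doCleanup(path);
            return 'failed';
        }
    }
}
```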
## Implementation Strategy

### Core Implementation Tasks
1. **Refactor Backup Config Models** - Simplify main config and move specific settings to source/destination configs
2. **Update Raw Source Processor** - Add streaming support with tar command generation
3. **Create Backup Orchestration Service** - Implement workflow detection and processor coordination
4. **Update Backup Config Service** - Integrate with new processor pattern and simplified config structure
5. **Update GraphQL Schema** - Reflect new config structure in API
6. **Add Comprehensive Testing** - Unit and integration tests for all workflows

### Backup Config Refactoring
- Remove source-specific fields from main BackupJobConfig
- Remove destination-specific fields from main BackupJobConfig
- Create proper TypeScript union types for sourceConfig and destinationConfig
- Update GraphQL input/output types to match new structure
- Migrate any existing config data to new structure

### Backup Orchestration Service
Create a new service that:
- Manages source and destination processor instances
- Implements streaming vs non-streaming workflow detection
- Handles job execution coordination
- Manages cleanup and error handling
- Provides unified progress reporting

### Updated Raw Source Processor
Enhance to support:
- Streaming tar command generation similar to the Flash processor
- Include/exclude pattern handling in tar commands
- Metadata flags indicating streaming capability
- Both streaming and direct file path modes
## Testing Strategy
|
||||
|
||||
### Unit Tests
|
||||
- Test each processor independently with mock dependencies
|
||||
- Validate streaming command generation
|
||||
- Test error handling and cleanup scenarios
|
||||
- Verify metadata preservation
|
||||
- Test config model validation and transformation
|
||||
|
||||
### Integration Tests
|
||||
- Test complete backup workflows (source → destination)
|
||||
- Validate streaming vs non-streaming path selection
|
||||
- Test job management and status tracking
|
||||
- Verify cleanup coordination
|
||||
- Test GraphQL API with new config structure
|
||||
|
||||
### Edge Cases
|
||||
- Network failures during streaming uploads
|
||||
- Source preparation failures with cleanup requirements
|
||||
- Mixed streaming/non-streaming configurations
|
||||
- Large file handling and timeout scenarios
|
||||
- Invalid config combinations
|
||||
|
||||
## Success Criteria
|
||||
|
||||
### Functional Success
|
||||
- Flash backups use streaming when appropriate
|
||||
- Raw backups can use either streaming or non-streaming based on configuration
|
||||
- Job scheduling and status tracking work correctly
|
||||
- All backup types execute successfully
|
||||
- Clean separation of concerns in config structure
|
||||
|
||||
### Technical Success
|
||||
- Clean separation between source and destination concerns
|
||||
- Consistent error handling and cleanup across all processors
|
||||
- Efficient streaming for large backups
|
||||
- Maintainable and extensible processor architecture
|
||||
- Simplified and logical config structure
|
||||
|
||||
### Performance Success
|
||||
- Streaming backups show improved memory usage for large datasets
|
||||
- Proper timeout handling prevents hung jobs
|
||||
- Resource cleanup prevents memory leaks
|
||||
- Fast execution for both streaming and non-streaming workflows
|
||||
|
||||
## Future Considerations
|
||||
|
||||
### Additional Source Types
|
||||
- ZFS snapshot streaming
|
||||
- Database dump streaming
|
||||
- Custom script output streaming
|
||||
- Docker container backup streaming
|
||||
|
||||
### Enhanced Destination Support
|
||||
- Multiple destination targets
|
||||
- Destination validation and health checks
|
||||
- Bandwidth throttling and QoS
|
||||
- Encryption at destination level
|
||||
|
||||
### Monitoring and Observability
|
||||
- Detailed metrics for streaming vs non-streaming performance
|
||||
- Progress tracking granularity improvements
|
||||
- Error categorization and alerting
|
||||
- Resource usage monitoring per backup type
|
@@ -1,132 +0,0 @@
{
    "Climb": "r5N8",
    "moves": [
        {
            "status": "done",
            "description": "Examine current backup config structure and interfaces",
            "details": "Review backup.model.ts, backup-config.service.ts, and GraphQL schema to understand current structure. Document what needs to be changed for the config simplification."
        },
        {
            "status": "done",
            "description": "Create new backup config interfaces",
            "details": "Define simplified BackupJobConfig interface with only job-level fields (id, name, schedule, enabled, timestamps). Create union types for sourceConfig and destinationConfig. Update backup.model.ts with new interfaces.",
            "rest": true
        },
        {
            "status": "done",
            "description": "Update source processor interfaces for streaming support",
            "details": "Enhance BackupSourceResult interface to include streamCommand, streamArgs, and streaming capability metadata. Update backup-source-processor.interface.ts to support both streaming and non-streaming workflows."
        },
        {
            "status": "done",
            "description": "Update destination processor interfaces",
            "details": "Enhance BackupDestinationConfig interface with useStreaming and supportsStreaming flags. Update backup-destination-processor.interface.ts to handle both streaming and regular backup inputs."
        },
        {
            "status": "done",
            "description": "Add streaming support to Raw Source Processor",
            "details": "Update raw-source-processor.service.ts to generate tar commands for streaming backups. Add include/exclude pattern handling in tar command generation. Add metadata flags for streaming capability. Maintain support for direct file path mode.",
            "rest": true
        },
        {
            "status": "done",
            "description": "Create Backup Orchestration Service",
            "details": "Create new backup-orchestration.service.ts that manages source and destination processor instances. Implement workflow detection logic (streaming vs non-streaming). Handle job execution coordination between processors."
        },
        {
            "status": "todo",
            "description": "Implement streaming workflow execution in orchestration service",
            "details": "Add executeStreamingBackup method that coordinates source streaming commands with destination streaming uploads. Handle progress reporting, error handling, and cleanup coordination for streaming workflows."
        },
        {
            "status": "todo",
            "description": "Implement regular workflow execution in orchestration service",
            "details": "Add executeRegularBackup method for non-streaming workflows. Handle file-based transfers from source output to destination. Implement consistent error handling and cleanup."
        },
        {
            "status": "todo",
            "description": "Update backup-config.service.ts to use new config structure",
            "details": "Refactor createBackupJobConfig and updateBackupJobConfig methods to work with simplified config structure. Remove handling of source/destination specific fields from main config. Update validation logic.",
            "rest": true
        },
        {
            "status": "todo",
            "description": "Integrate orchestration service into backup-config.service.ts",
            "details": "Replace direct rclone service calls in executeBackupJob with orchestration service. Update job execution to use processor pattern. Maintain existing cron scheduling and job tracking functionality."
        },
        {
            "status": "todo",
            "description": "Update GraphQL schema for new config structure",
            "details": "Update backup GraphQL types to reflect simplified BackupJobConfig structure. Create separate input types for different source and destination configs. Update mutations and queries to handle new structure."
        },
        {
            "status": "todo",
            "description": "Update backup JSON forms configuration",
            "details": "Refactor backup-jsonforms-config.ts to remove destination-specific fields (remoteName, destinationPath, rcloneOptions) from basic config. Create separate destination config section. Reorganize form steps to separate job config, source config, and destination config clearly.",
            "rest": true
        },
        {
            "status": "todo",
            "description": "Create source config factory/registry",
            "details": "Create a service to manage source processor instances by type. Implement getSourceProcessor method that returns appropriate processor based on sourceType. Handle processor dependency injection and lifecycle."
        },
        {
            "status": "todo",
            "description": "Create destination config factory/registry",
            "details": "Create a service to manage destination processor instances by type. Implement getDestinationProcessor method that returns appropriate processor based on destinationType. Handle processor dependency injection and lifecycle."
        },
        {
            "status": "todo",
            "description": "Add comprehensive error handling and cleanup coordination",
            "details": "Implement consistent error propagation between source and destination processors. Add cleanup coordination when either source or destination fails. Handle timeout scenarios for both streaming and non-streaming operations.",
            "rest": true
        },
        {
            "status": "todo",
            "description": "Add progress reporting interface",
            "details": "Create unified progress reporting interface that works for both streaming and non-streaming workflows. Implement real-time status updates. Ensure metadata preservation throughout the pipeline."
        },
        {
            "status": "todo",
            "description": "Write unit tests for Raw Source Processor streaming",
            "details": "Test streaming command generation with various include/exclude patterns. Test metadata flags and streaming capability detection. Test error handling and cleanup scenarios. Mock dependencies appropriately."
        },
        {
            "status": "todo",
            "description": "Write unit tests for Backup Orchestration Service",
            "details": "Test workflow detection logic (streaming vs non-streaming). Test source and destination processor coordination. Test error handling and cleanup coordination. Mock all processor dependencies."
        },
        {
            "status": "todo",
            "description": "Write unit tests for updated backup-config.service.ts",
            "details": "Test config creation and updates with new structure. Test validation of source and destination configs. Test job execution with orchestration service. Test cron scheduling functionality."
        },
        {
            "status": "todo",
            "description": "Write integration tests for complete backup workflows",
            "details": "Test Flash source with RClone destination (streaming). Test Raw source with RClone destination (both streaming and non-streaming). Test job management and status tracking. Test cleanup coordination.",
            "rest": true
        },
        {
            "status": "todo",
            "description": "Test edge cases and error scenarios",
            "details": "Test network failures during streaming uploads. Test source preparation failures with cleanup requirements. Test invalid config combinations. Test large file handling and timeout scenarios."
        },
        {
            "status": "todo",
            "description": "Update existing backup configs to new structure",
            "details": "Create migration logic to convert any existing backup configs to new simplified structure. Move source-specific and destination-specific fields to appropriate sub-configs. Test migration with existing data."
        },
        {
            "status": "todo",
            "description": "Performance testing and optimization",
            "details": "Test streaming vs non-streaming performance with large datasets. Verify memory usage improvements for streaming backups. Test timeout handling and resource cleanup. Benchmark execution times for different backup types."
        },
        {
            "status": "todo",
            "description": "Documentation and final validation",
            "details": "Document new config structure and processor architecture. Create examples for different backup configurations. Validate all backup types work correctly. Ensure clean separation of concerns achieved.",
            "rest": true
        }
    ]
}
@@ -1,184 +0,0 @@
**STARTFILE x7K9-climb.md**
<Climb>
<header>
<id>x7K9</id>
<type>feature</type>
<description>Enhanced Backup Job Management System with disable/enable controls, manual triggering, and real-time progress monitoring</description>
</header>
<newDependencies>No new external dependencies expected - leveraging existing GraphQL subscriptions infrastructure</newDependencies>
<prerequisiteChanges>None - building on existing backup system architecture</prerequisiteChanges>
<relevantFiles>
- web/components/Backup/BackupJobConfig.vue (main UI component)
- web/components/Backup/backup-jobs.query.ts (GraphQL queries/mutations)
- api/src/unraid-api/graph/resolvers/backup/backup.resolver.ts (GraphQL resolver)
- api/src/unraid-api/graph/resolvers/backup/backup-config.service.ts (business logic)
- api/src/unraid-api/graph/resolvers/backup/backup.model.ts (GraphQL schema types)
</relevantFiles>

## Feature Overview
Enhance the existing backup job management system to provide better control and monitoring capabilities for users managing their backup operations.

## Purpose Statement
Users need granular control over their backup jobs: the ability to enable or disable individual jobs, manually trigger scheduled jobs on demand, and monitor the real-time progress of running backup operations.

## Problem Being Solved
- Users cannot easily disable/enable individual backup jobs without deleting them
- No way to manually trigger a scheduled backup job outside its schedule
- No real-time visibility into backup job progress once initiated
- Limited feedback on current backup operation status

## Success Metrics
- Users can toggle backup jobs on/off without losing configuration
- Users can manually trigger any configured backup job
- Real-time progress updates for active backup operations
- Improved user experience with immediate feedback

## Functional Requirements

### Job Control
- Toggle individual backup jobs between enabled and disabled states
- Manual trigger functionality for any configured backup job
- Preserve all job configuration when disabling
- Visual indicators for job state (enabled/disabled/running)

### Progress Monitoring
- Real-time subscription for backup job progress
- Display progress percentage, speed, ETA, and transferred data
- Show currently running jobs in the UI
- Update job status in real time without a page refresh

### UI Enhancements
- Add enable/disable toggle controls to job cards
- Add a "Run Now" button for manual triggering
- Progress indicators and status updates
- Better visual feedback for job states

## Technical Requirements

### GraphQL API
- Add a mutation for enabling/disabling backup job configs
- Add a mutation for manually triggering backup jobs by config ID
- Add a subscription for real-time backup job progress updates
- Extend the existing BackupJob type with progress fields

### Backend Services
- Enhance BackupConfigService with enable/disable functionality
- Add a manual trigger capability that uses existing job configs
- Implement a subscription resolver for real-time updates
- Ensure proper error handling and status reporting

### Frontend Implementation
- Add toggle controls to BackupJobConfig.vue
- Implement manual trigger buttons
- Subscribe to progress updates and display them in the UI
- Handle loading states and error conditions

## User Flow

### Disabling a Job
1. User views the backup job list
2. User clicks the toggle to disable a job
3. Job status updates immediately
4. Scheduled execution stops; configuration is preserved

### Manual Triggering
1. User clicks "Run Now" on any configured job
2. System validates the job configuration
3. Backup initiates immediately
4. User sees real-time progress updates

### Progress Monitoring
1. User initiates a backup (scheduled or manual)
2. Progress subscription activates automatically
3. Real-time updates appear in the UI
4. Completion status updates when the job finishes

## API Specifications

### New Mutations (Nested Pattern)
Following the established pattern from ArrayMutations, create BackupMutations:

```graphql
type BackupMutations {
  toggleJobConfig(id: String!, enabled: Boolean!): BackupJobConfig
  triggerJob(configId: String!): BackupStatus
}
```

### Implementation Structure
- Create a `BackupMutationsResolver` class similar to `ArrayMutationsResolver`
- Use `@ResolveField()` decorators instead of `@Mutation()`
- Add appropriate `@UsePermissions()` decorators
- Group all backup-related mutations under the `BackupMutations` type

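Stripped of its NestJS decorators, the resolver described under Implementation Structure might look roughly like this. It is a sketch: the service interface and return shapes are assumptions, and in the real code each method would carry `@ResolveField()` and an appropriate `@UsePermissions()` decorator, with the class decorated as a resolver for `BackupMutations`:

```typescript
// Sketch of a nested-mutations resolver delegating to a config service.
// Decorators (@Resolver, @ResolveField, @UsePermissions) omitted for brevity.
interface BackupJobConfig {
    id: string;
    enabled: boolean;
    updatedAt: string;
}

interface BackupStatus {
    status: string;
    jobId: string;
}

// Hypothetical service surface; the actual BackupConfigService API may differ.
interface BackupConfigServiceLike {
    setEnabled(id: string, enabled: boolean): BackupJobConfig;
    triggerJob(configId: string): BackupStatus;
}

class BackupMutationsResolver {
    constructor(private readonly configService: BackupConfigServiceLike) {}

    // Maps to BackupMutations.toggleJobConfig in the schema.
    toggleJobConfig(id: string, enabled: boolean): BackupJobConfig {
        return this.configService.setEnabled(id, enabled);
    }

    // Maps to BackupMutations.triggerJob in the schema.
    triggerJob(configId: string): BackupStatus {
        return this.configService.triggerJob(configId);
    }
}
```

Keeping the resolver this thin means the enable/disable and trigger logic stays in the service layer, where it can be unit-tested without GraphQL machinery.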
### New Subscription
```graphql
backupJobProgress(jobId: String): BackupJob
```

### Enhanced Types
- Extend BackupJob with progress percentage
- Add jobConfigId reference to running jobs
- Include more detailed status information

### Frontend GraphQL Usage
```graphql
mutation ToggleBackupJob($id: String!, $enabled: Boolean!) {
  backup {
    toggleJobConfig(id: $id, enabled: $enabled) {
      id
      enabled
      updatedAt
    }
  }
}

mutation TriggerBackupJob($configId: String!) {
  backup {
    triggerJob(configId: $configId) {
      status
      jobId
    }
  }
}
```

## Implementation Considerations

### Real-time Updates
- Use existing GraphQL subscription infrastructure
- Efficient polling of rclone API for progress data
- Proper cleanup of subscriptions when jobs complete

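The poll-and-cleanup behaviour could be sketched as an async generator that a subscription resolver drains. Names and the stats shape are assumptions; the real implementation would sit on the existing subscription infrastructure and poll the rclone API:

```typescript
// Hypothetical sketch: poll a stats function until the job reports completion,
// yielding each snapshot. The finally block models subscription cleanup, which
// runs both on normal completion and when the subscriber disconnects early.
interface JobProgress {
    jobId: string;
    percent: number;
    finished: boolean;
}

async function* pollJobProgress(
    jobId: string,
    fetchStats: (jobId: string) => Promise<JobProgress>,
    intervalMs = 1000
): AsyncGenerator<JobProgress> {
    try {
        while (true) {
            const stats = await fetchStats(jobId);
            yield stats;
            if (stats.finished) return; // completion ends the stream
            await new Promise((resolve) => setTimeout(resolve, intervalMs));
        }
    } finally {
        // Release any polling resources here (timers, API handles, etc.).
    }
}
```

Injecting `fetchStats` keeps the generator independent of the rclone client, so the polling loop and its cleanup path can be tested with a fake stats source.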
### State Management
- Update job configs atomically
- Handle concurrent operations gracefully
- Maintain consistency between scheduled and manual executions

### Error Handling
- Validate job configs before manual triggering
- Graceful degradation if progress updates fail
- Clear error messages for failed operations

## Testing Approach

### Test Cases
- Toggle job enabled/disabled state
- Manual trigger of backup jobs
- Real-time progress subscription functionality
- Error handling for invalid operations
- Concurrent job execution scenarios

### Acceptance Criteria
- Jobs can be disabled/enabled without data loss
- Manual triggers work for all valid job configurations
- Progress updates are accurate and timely
- UI responds appropriately to all state changes
- No memory leaks from subscription management

## Future Considerations
- Job scheduling modification (change cron without recreate)
- Backup job templates and bulk operations
- Advanced progress details (file-level progress)
- Job history and logging improvements
</Climb>
**ENDFILE**
@@ -1,63 +0,0 @@
{
    "Climb": "x7K9",
    "moves": [
        {
            "status": "done",
            "description": "Create BackupMutations GraphQL type and resolver structure",
            "details": "Add BackupMutations type to backup.model.ts, create backup-mutations.resolver.ts file, and move existing mutations (createBackupJobConfig, updateBackupJobConfig, deleteBackupJobConfig, initiateBackup) from BackupResolver to the new BackupMutationsResolver following the ArrayMutationsResolver pattern"
        },
        {
            "status": "done",
            "description": "Implement toggleJobConfig mutation",
            "details": "Add toggleJobConfig resolver method with proper permissions and update BackupConfigService to handle enable/disable functionality"
        },
        {
            "status": "done",
            "description": "Implement triggerJob mutation",
            "details": "Add triggerJob resolver method that manually triggers a backup job using existing config, with validation and error handling"
        },
        {
            "status": "done",
            "description": "Add backupJobProgress subscription",
            "details": "Create GraphQL subscription resolver for real-time backup job progress updates using existing rclone API polling",
            "rest": true
        },
        {
            "status": "done",
            "description": "Enhance BackupJob type with progress fields",
            "details": "Add progress percentage, configId reference, and detailed status fields to BackupJob model"
        },
        {
            "status": "done",
            "description": "Update frontend GraphQL queries and mutations",
            "details": "Add new mutations and subscription to backup-jobs.query.ts following the nested mutation pattern"
        },
        {
            "status": "done",
            "description": "Add toggle controls to BackupJobConfig.vue",
            "details": "Add enable/disable toggle switches to each job card with proper state management and error handling"
        },
        {
            "status": "done",
            "description": "Add manual trigger buttons to BackupJobConfig.vue",
            "details": "Add 'Run Now' buttons with loading states and trigger the new mutation",
            "rest": true
        },
        {
            "status": "done",
            "description": "Implement progress monitoring in the UI",
            "details": "Subscribe to backup job progress and display real-time updates in the job cards with progress bars and status"
        },
        {
            "status": "done",
            "description": "Add visual indicators for job states",
            "details": "Enhance job cards with better status indicators for enabled/disabled/running states and improve overall UX"
        },
        {
            "status": "todo",
            "description": "Test integration and error handling",
            "details": "Test all functionality including edge cases, error scenarios, and subscription cleanup",
            "rest": true
        }
    ]
}